API Getting Started Guide
This guide walks through common tasks using the Filebase S3-compatible API.

What is an API?

An Application Programming Interface, or API, is a software technology that allows two applications to communicate and interact with each other.
When an application connects to the internet, it sends a request to a server for data. The server receives the request, retrieves the requested data, then sends it back to the application. When the application receives the data, it interprets it and presents it in a readable format within the application for you to read.
Another way to think about APIs is to use a real-world example. If you’re at a restaurant for dinner, you have a menu of dishes to choose from. This menu acts as your application. When the waiter takes your order, the waiter acts as the messenger, or the API, and relays your order to the kitchen. The kitchen, which acts as the data server, receives your order, prepares it, and gives it back to your waiter to bring out to you. When the waiter brings you back your food, this is the API bringing back the data request to you.
Modern APIs have universal characteristics and attributes that make them not only widely usable and transferable, but valuable to developers and end-users.
Some of these characteristics include adhering to universal standards such as HTTP and REST, making them developer-friendly and understood broadly. APIs also have a strong discipline for standardization and governance, making them scalable and performant.

Getting Started

Filebase can be used through the console web interface found at https://console.filebase.com. The getting started guide on using the web interface can be found here.
While the web interface is necessary for functions such as viewing and rotating Filebase Access Keys or updating billing information, most interactions with the Filebase platform happen through API requests.
This guide will cover the Filebase S3-compatible API and common tools used to interact with the Filebase API.

Access Keys

To use the Filebase S3-compatible API, you will need to have your Filebase Access and Secret key pair to submit API requests.
To view the access key for your Filebase account, start by clicking on the ‘Access Keys’ option from the menu to open the access keys dashboard.
Here you can view the access keys for your account. Each access key has two parts, the key and the secret associated with the key. The access key dashboard will also provide information such as the time and date the access key was created and its current status.
To use access keys, you will need to have both the key and the secret associated with that key.
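If you plan to use AWS-style tooling such as the AWS CLI or an AWS SDK, one common place to keep this key pair is the AWS shared credentials file (typically ~/.aws/credentials). The snippet below is an illustrative sketch with placeholder values, not output generated by Filebase:

[default]
aws_access_key_id = YOUR_FILEBASE_ACCESS_KEY
aws_secret_access_key = YOUR_FILEBASE_SECRET_KEY

Running aws configure will create this file interactively; you can also pass the keys directly to most tools, as some of the later examples in this guide do.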

API endpoint

The Filebase S3-compatible API endpoint is https://s3.filebase.com.
This endpoint can be used with S3-compatible tools, SDKs, or frameworks to communicate with the Filebase platform.
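As a quick sanity check, you can point any S3-compatible tool at this endpoint. For example, assuming the AWS CLI is installed and configured with your Filebase key pair, the following command lists your buckets through Filebase:

aws --endpoint https://s3.filebase.com s3api list-buckets

The --endpoint flag is what directs the CLI to Filebase instead of Amazon; the same pattern appears in the CORS examples later in this guide.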

Authentication

The Filebase S3-compatible API only supports AWS v4 signatures (AWS4-HMAC-SHA256) for authentication and does not support AWS v2 signatures.
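If you use an SDK, make sure it is configured for v4 signing. The sketch below, assuming Python with boto3 installed and placeholder credentials, creates a client against the Filebase endpoint and pins the signature version explicitly (boto3 already defaults to v4 for S3, so the Config line is a precaution):

# Minimal sketch: S3 client for Filebase using AWS Signature Version 4.
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.filebase.com",
    aws_access_key_id="YOUR_FILEBASE_ACCESS_KEY",      # placeholder
    aws_secret_access_key="YOUR_FILEBASE_SECRET_KEY",  # placeholder
    region_name="us-east-1",
    config=Config(signature_version="s3v4"),           # force AWS v4 signatures
)

Later Python snippets in this guide reuse this s3 client.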

HTTPS Protocol

Filebase maintains a strict HTTPS-only standard. This means objects and API calls are served only via HTTPS. The port for this connection is the HTTPS standard port 443.
It is not possible to disable this at this time. Requests sent via the HTTP protocol will be redirected to HTTPS.

Supported API Methods

The list below documents Filebase's currently supported S3 API methods:
  • AbortMultipartUpload
  • CompleteMultipartUpload
  • CreateBucket
  • CreateMultipartUpload
  • DeleteBucket
  • DeleteBucketCors
  • DeleteObject
  • GetBucketAcl
  • GetBucketCors
  • GetBucketLifecycle
  • GetBucketLifecycleConfiguration
  • GetBucketLocation
  • GetBucketLogging
  • GetBucketVersioning
  • GetObject
  • GetObjectAcl
  • HeadBucket
  • HeadObject
  • ListBuckets
  • ListObjects
  • ListObjectsV2
  • PutBucketAcl
  • PutBucketCors
  • PutObject
  • PutObjectAcl
  • UploadPart
When a response payload is present, all responses are returned using UTF-8 encoded XML.
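As an illustration of how these methods map onto SDK calls, the hedged sketch below reuses the boto3 client from the Authentication section and a hypothetical bucket named my-bucket to exercise ListObjectsV2 and HeadObject:

# Sketch: list objects, then fetch per-object metadata.
# (Requires the `s3` client created in the Authentication section.)
resp = s3.list_objects_v2(Bucket="my-bucket")                   # ListObjectsV2
for obj in resp.get("Contents", []):
    head = s3.head_object(Bucket="my-bucket", Key=obj["Key"])   # HeadObject
    print(obj["Key"], head["ContentLength"])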

Pre-signed URLs

The Filebase S3-compatible API supports pre-signed URLs for downloading and uploading objects. Pre-signed URLs can be generated in a number of ways including the AWS CLI and the AWS SDKs.
For more information on Pre-signed URLS, see our guide here.
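As one illustration, the sketch below uses the boto3 client from the Authentication section to generate a pre-signed download URL; the bucket and object names are placeholders:

# Sketch: pre-signed GET URL, valid for one hour.
# (Requires the `s3` client created in the Authentication section.)
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "example.txt"},
    ExpiresIn=3600,
)
print(url)

Anyone holding this URL can download the object until it expires, without needing your access keys.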

Access Control Lists (ACLs)

The Filebase S3-compatible API features limited support for Access Control Lists (ACLs). Object-level ACLs are currently not supported.
The GetObjectAcl and GetBucketAcl methods will work as expected, but the GetObjectAcl response will return the ACL of the bucket that the object is contained in.
This design eliminates the possibility of a user accidentally making an object public within a private bucket. If a mix of private and public objects is required for your workflow, you will need to create two different buckets.
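The hedged sketch below, reusing the boto3 client from the Authentication section with placeholder names, shows the behavior described above: both calls succeed, but the object-level response mirrors the bucket's ACL:

# Sketch: GetBucketAcl vs. GetObjectAcl on Filebase.
# (Requires the `s3` client created in the Authentication section.)
bucket_acl = s3.get_bucket_acl(Bucket="my-bucket")
object_acl = s3.get_object_acl(Bucket="my-bucket", Key="example.txt")
print(bucket_acl["Grants"] == object_acl["Grants"])  # expected: True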

Cross-Origin Resource Sharing (CORS)

Cross-origin resource sharing (CORS) creates a way for client web applications located on one domain to have the ability to interact with resources located on a different domain. With CORS, websites and applications can access files and resources stored on Filebase buckets.
The Filebase S3-compatible API supports CORS configurations for buckets.
The following API methods are supported:
  • GetBucketCors
  • PutBucketCors
  • DeleteBucketCors
To configure a Filebase bucket to allow cross-origin requests, you will need to create a CORS rule. This rule identifies the origins that you will allow to access your bucket, the HTTP methods that will be supported for each origin, and other operation-specific information.
This rule can be written as a JSON or XML file, though if you use the AWS CLI to apply it, a .json file is required.
Example #1 JSON: This example is a wildcard rule that allows cross-origin GET requests from all origins.
{
    "CORSRules": [
        {
            "AllowedHeaders": [],
            "AllowedMethods": [
                "GET"
            ],
            "AllowedOrigins": [
                "*"
            ],
            "ExposeHeaders": []
        }
    ]
}
Example #1 XML: The same wildcard rule expressed in XML.
<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
    </CORSRule>
</CORSConfiguration>
CORS also allows optional configuration parameters, as shown in the following CORS rule.
  • MaxAgeSeconds: Specifies the amount of time in seconds that the browser caches a response to a preflight OPTIONS request for the specified resource.
  • ExposeHeader: Identifies the response headers that customers are able to access from their applications. In this example, x-amz-server-side-encryption, x-amz-request-id, and x-amz-id-2.
Example #2 JSON: In this second example, the CORS rule allows cross-origin PUT, POST, and DELETE requests from the http://www.example.com origin, with a MaxAgeSeconds of 3000 and ExposeHeaders of x-amz-server-side-encryption, x-amz-request-id, and x-amz-id-2.
{
    "CORSRules": [
        {
            "AllowedHeaders": [
                "*"
            ],
            "AllowedMethods": [
                "PUT",
                "POST",
                "DELETE"
            ],
            "AllowedOrigins": [
                "http://www.example.com"
            ],
            "ExposeHeaders": [
                "x-amz-server-side-encryption",
                "x-amz-request-id",
                "x-amz-id-2"
            ],
            "MaxAgeSeconds": 3000
        }
    ]
}
Example #2 XML:
<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>http://www.example.com</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
        <ExposeHeader>x-amz-request-id</ExposeHeader>
        <ExposeHeader>x-amz-id-2</ExposeHeader>
    </CORSRule>
</CORSConfiguration>

Applying a CORS Rule to a Filebase Bucket

To apply a CORS rule, you can use a tool such as the AWS CLI to apply the .json file you created containing the rule. For information on how to configure AWS CLI, see here.
From the command line, enter the following command to apply the CORS rule to the intended Filebase bucket:
aws --endpoint https://s3.filebase.com s3api put-bucket-cors \
    --bucket bucket-name \
    --cors-configuration=file://corspolicy.json

Testing the CORS Configuration

You can confirm that the CORS configuration for the bucket was applied successfully by using the command:
aws --endpoint https://s3.filebase.com s3api get-bucket-cors \
    --bucket bucket-name
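If you prefer an SDK over the AWS CLI, the same apply-and-verify flow can be sketched with boto3, reusing the client from the Authentication section; the rule below mirrors Example #1 and the bucket name is a placeholder:

# Sketch: apply a wildcard GET CORS rule, then read it back.
# (Requires the `s3` client created in the Authentication section.)
cors = {
    "CORSRules": [
        {
            "AllowedHeaders": [],
            "AllowedMethods": ["GET"],
            "AllowedOrigins": ["*"],
            "ExposeHeaders": [],
        }
    ]
}
s3.put_bucket_cors(Bucket="my-bucket", CORSConfiguration=cors)
print(s3.get_bucket_cors(Bucket="my-bucket")["CORSRules"])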

Using the API

There are a wide variety of ways to interact with and use the Filebase S3-compatible API. This guide will provide a few common examples, though there are many varieties of tools, frameworks, and SDKs that are supported. For a complete list of our tool documentation, please refer to https://docs.filebase.com.

Postman

Postman is an API platform for building and using APIs.
To use Postman with Filebase, you will need a Postman account, your Filebase access and secret keys, and an existing Filebase bucket.

First, log in to your Postman account.

Select ‘Workspaces’ from the top menu navigation bar, and select an existing workspace or create a new one.

Enter the desired settings for a new workspace if creating one.

In your workspace, select the ‘Collections’ tab on the left navigation bar, then select ‘New’.

Select ‘HTTP Request’.

Configure the settings for the HTTP Request. Select the ‘GET’ HTTP Request type, followed by the URL of your Filebase bucket.

The URL format for Filebase buckets is as follows, where ‘bucket-name’ is the name of your Filebase bucket:

https://s3.filebase.com/bucket-name

Then, select the Authorization tab. Configure the following parameters:

Type: AWS Signature
Add Authorization Data To: Request URL
Access Key: Your Filebase Access Key
Secret Key: Your Filebase Secret Key
AWS Region: us-east-1
Service Name: s3
Session Token: Not required, only necessary if using temporary credentials.

Select the blue ‘Send’ button to test your configuration. You should receive a response in XML format listing the bucket contents and metadata.

S3cmd

S3cmd is a command-line S3 client and backup tool for Linux and macOS.
To use S3cmd, download the application and have your Filebase access and secret keys ready.
To configure S3cmd, run the command:
s3cmd --configure
You will be prompted to fill out the following information:
  • Access Key: Filebase Access Key
  • Secret Key: Filebase Secret Key
  • Default Region: us-east-1
  • S3 Endpoint: s3.filebase.com
  • Bucket Name: Filebase Bucket Name
  • Encryption Password: Unique password
  • Path to GPG Program: If stored in default system location, press enter to confirm.
  • Use HTTPS Protocol: Yes
  • HTTP Proxy Server Name: Enter to bypass.
You will see a summary of these settings and be prompted to test access to Filebase with these settings. Once tested, you will be prompted to save the settings. Then you’re ready to start using S3cmd.
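These prompts are written to s3cmd's configuration file, typically ~/.s3cfg. The lines below are an illustrative sketch of the Filebase-relevant entries with placeholder keys; the exact contents vary with the s3cmd version:

[default]
access_key = YOUR_FILEBASE_ACCESS_KEY
secret_key = YOUR_FILEBASE_SECRET_KEY
host_base = s3.filebase.com
host_bucket = s3.filebase.com
use_https = True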
You can use S3cmd with commands such as the following:
  • Make bucket: s3cmd mb s3://BUCKET
  • Remove bucket: s3cmd rb s3://BUCKET
  • List objects or buckets: s3cmd ls [s3://BUCKET[/PREFIX]]
  • List all objects in all buckets: s3cmd la
  • Put file into bucket: s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
  • Get file from bucket: s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
  • Delete file from bucket: s3cmd del s3://BUCKET/OBJECT
  • Synchronize a directory tree to S3: s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR
  • Disk usage by buckets: s3cmd du [s3://BUCKET[/PREFIX]]
  • Get various information about Buckets or Files: s3cmd info s3://BUCKET[/OBJECT]
  • Copy object: s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
  • Move object: s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
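For example, a simple upload-list-download round trip against a hypothetical bucket might look like this (bucket and file names are placeholders):

s3cmd put backup.tar.gz s3://my-filebase-bucket/backups/
s3cmd ls s3://my-filebase-bucket/backups/
s3cmd get s3://my-filebase-bucket/backups/backup.tar.gz restored.tar.gz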

Cyberduck

Cyberduck is a free cloud storage browser for Windows and macOS that supports Filebase, FTP, SFTP, and other cloud storage services.
To use Cyberduck with the Filebase S3-compatible API, download and install Cyberduck, have your Filebase access and secret keys, and create a Filebase bucket.

To set up Cyberduck, start by downloading the preconfigured Filebase Profile for Cyberduck.

Then open the Filebase Profile in Cyberduck. A connection profile window will open.

The server name and URL are preconfigured. You will need to provide your Filebase S3 API Access Key and S3 Secret Key, found in the Filebase web console under Access Keys.

After entering your credentials, close the profile window and open the Cyberduck browser. You should see your Filebase connection.

Double click on the s3.filebase.com connection to connect. You should then see a window appear that lists your existing Filebase buckets.

Double click a bucket to open the contents of the bucket.

From this window, you can click and drag files and folders from other windows to be uploaded to your Filebase bucket.

You can monitor the upload progress through the Cyberduck Transfer window.

You can confirm that the files are reflected in your Filebase bucket through the Filebase web console.

These tools are only a small sample of the full list of supported S3-compatible API tools. For a complete list of our tool documentation, please refer to https://docs.filebase.com.

Glossary of Terms

Access Key: When accessing Filebase through the S3 API, an access key is required to access your Filebase buckets and objects. Your access key has two components - the key and the secret. You will need both, similar to using a username and password.
API: Application Programming Interface; A software intermediary that allows two applications to communicate to one another.
AWS: Amazon Web Services; Amazon’s cloud computing platform.
Bucket: In object storage, buckets are similar to a traditional file system’s folders. Buckets are containers for objects and the associated metadata of those objects. Unlike traditional file systems, buckets cannot be nested into one another like file folders can be.
End-User: Any individual that uses a product or service.
Erasure Coding: A method of data protection in which data is broken into fragments (also called chunks or shards), encrypted and encoded, then distributed across different locations.
Geo-redundancy: The practice of storing data across multiple data centers or physical locations throughout different geographic locations.
IPFS: InterPlanetary File System; A peer-to-peer network for sharing and storing data over a distributed file system.
Metadata: In object storage, metadata is fully customizable and functional for objects, allowing you to capture application or user-specific information for more specific indexing purposes and data management policies.
Nodes: A compute server or other communication endpoint.
Peer-to-Peer Networks: A network topology where a group of nodes that are connected together have equal responsibility, permissions, and access to resources.
Object: In object storage, objects are similar to a traditional file system’s files. Each object contains data along with its associated metadata.
Object Storage: A computer storage architecture that stores data in the form of objects, as opposed to file system storage architectures that store data as files located within a hierarchy.
Sharding: Separating data objects into small, individual units referred to as shards.
Shards: A partition or piece of a data file.
Sia: An open-source decentralized storage network.
Skynet: A “layer-2” decentralized storage platform that leverages the Sia network.
Storj: An open-source decentralized cloud storage network.
S3: An object storage service and API originally created by Amazon that has since become a de facto industry standard; many other providers now offer their own S3-compatible APIs. All references to S3 in this document refer to the S3 API.
If you have any questions, please join our Discord server, or send us an email at [email protected]