What Is The Difference Between IPFS and Sia?
This page details the differences between the currently supported networks utilized by Filebase.
Filebase currently utilizes the IPFS and Sia decentralized networks. Each network protects stored data through its own combination of encryption, replication, and erasure-coding technology, and offers native geo-redundancy. Each network, however, is better suited to certain use cases than others based on its underlying framework and technologies.


IPFS

InterPlanetary File System, or IPFS, is a distributed and decentralized storage network for storing and accessing files, websites, data, and applications. IPFS uses peer-to-peer network technology to connect a series of nodes located across the world that make up the IPFS network.
Three core concepts make IPFS unique:
  • Unique Data Identification via Content Addressing: Data stored on IPFS is located by its content address rather than its physical location. When data is stored on IPFS, it is split into a series of pieces, each with its own unique content identifier, or CID, derived from a hash of the piece's contents. This hash both identifies the piece and links it to the other pieces of that data.
  • Content Linking via Directed Acyclic Graphs (DAGs): DAGs are a hierarchical data structure in which each node is identified by a hash of its contents. IPFS uses DAGs to link the pieces of a stored object together, with each node's CID derived from the contents it references.
  • Content Discovery through Distributed Hash Tables (DHTs): DHTs are databases of keys and values that are split across all the peers on a distributed network. To locate content, you ask a peer on the network, which returns DHT records telling you which peers are storing which blocks of content that make up the data object you're requesting.
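The core property of content addressing described above is that the address is computed from the data itself. The following is a minimal toy sketch of that idea in Python; it is not the actual IPFS CID algorithm (real CIDs wrap a multihash with version and codec metadata), and the function name is illustrative:

```python
import hashlib

def toy_content_id(data: bytes) -> str:
    """Toy content identifier: a SHA-256 digest of the data.

    Real IPFS CIDs carry extra multihash/codec metadata, but the key
    property is the same: the address is derived from the content,
    not from where the content is stored.
    """
    return hashlib.sha256(data).hexdigest()

a = toy_content_id(b"hello ipfs")
b = toy_content_id(b"hello ipfs")
c = toy_content_id(b"hello ipfs!")

assert a == b  # identical content -> identical address
assert a != c  # any change to the content changes the address
```

Because the identifier is deterministic, any peer can verify that the block it received actually matches the CID it requested.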

Native IPFS URLs

Applications that natively support IPFS content addressing can refer to content stored on IPFS in the format:
ipfs://{CID}/{optional path to resource}
This format doesn’t work for applications or tools that rely on HTTP, such as Curl or Wget. For these tools, you need to use an IPFS gateway.
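One way to bridge the two worlds is to rewrite a native `ipfs://` URL into a gateway URL before handing it to an HTTP tool. A minimal sketch using only the Python standard library (the function name is hypothetical; `ipfs.io` is one public gateway, used here as an example default):

```python
from urllib.parse import urlparse

def ipfs_to_gateway_url(ipfs_url: str, gateway: str = "https://ipfs.io") -> str:
    """Rewrite a native ipfs://{CID}/{optional path} URL into an HTTP gateway URL."""
    parsed = urlparse(ipfs_url)
    if parsed.scheme != "ipfs":
        raise ValueError("not an ipfs:// URL")
    cid = parsed.netloc   # urlparse treats the CID as the host portion
    path = parsed.path    # optional path to a resource; may be empty
    return f"{gateway}/ipfs/{cid}{path}"
```

The resulting URL can then be fetched with curl, Wget, or any other HTTP client.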

IPFS Gateways

Content stored on IPFS can be accessed by using an IPFS gateway. Gateways are used to provide workarounds for applications that don’t natively support IPFS. An IPFS gateway can be local, private, or public, and uses the IPFS content ID to provide a URL link to the content for access to the stored content.
Filebase's native IPFS gateway is as follows: https://ipfs.filebase.io/ipfs/{CID}
All content stored on IPFS through Filebase can be accessed through the Filebase gateway with faster response times than accessing the content through any other gateway. This is because the Filebase gateway peers with our IPFS nodes. The Filebase gateway also peers with the IPFS gateways of other pinning services.
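As an illustration, content can be retrieved through the Filebase gateway with any HTTP-capable tool by building the URL from the CID. A minimal sketch using only the Python standard library (the helper names are hypothetical):

```python
import urllib.request

FILEBASE_GATEWAY = "https://ipfs.filebase.io/ipfs"

def gateway_url(cid: str, path: str = "") -> str:
    """Build the Filebase gateway URL for a CID plus an optional resource path."""
    return f"{FILEBASE_GATEWAY}/{cid}{path}"

def fetch(cid: str, path: str = "") -> bytes:
    """Retrieve the content over plain HTTP, as any browser or HTTP client would."""
    with urllib.request.urlopen(gateway_url(cid, path), timeout=30) as resp:
        return resp.read()
```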

IPFS Subdomain Gateways

The format of a subdomain gateway is as follows:
https://{CID}.ipfs.{gateway domain}/{optional path to resource}
Using a subdomain gateway as a drop-in replacement for a traditional path gateway removes the need for a CID version translation step. Because DNS subdomains are case-insensitive, a subdomain gateway requires the CIDv1 format; the older CIDv0 format is case-sensitive and cannot appear in a hostname. When a CIDv0 resource is accessed through a traditional gateway, the gateway returns an HTTP 301 redirect to a subdomain URL, converting the CIDv0 CID to a CIDv1 CID in the process.
For example, opening a CIDv0 resource (a CID beginning with "Qm") through a traditional gateway returns a redirect to its CIDv1 subdomain representation.
By using a CIDv1 subdomain gateway URL initially, there is no need for this conversion step to take place.
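The subdomain-URL construction, and the reason CIDv0 cannot be used in one, can be sketched as follows (the function name is hypothetical; `dweb.link` is one public subdomain gateway, used here as an example default):

```python
def subdomain_gateway_url(cid_v1: str, gateway_domain: str = "dweb.link",
                          path: str = "") -> str:
    """Build a subdomain-style gateway URL: https://{CID}.ipfs.{domain}{path}.

    DNS labels are case-insensitive, so the CID must be a CIDv1 in a
    lowercase encoding such as base32. Case-sensitive CIDv0 strings
    ("Qm...") cannot be placed in a hostname, which is why path
    gateways redirect CIDv0 requests to a CIDv1 subdomain first.
    """
    if cid_v1.startswith("Qm"):
        raise ValueError("CIDv0 is case-sensitive; convert to CIDv1 first")
    return f"https://{cid_v1}.ipfs.{gateway_domain}{path}"
```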

What is ‘pinning’ data with IPFS?

When data is stored in a Filebase bucket, Filebase's native edge caching technology keeps that data cached at the network's edge locations for quick repeat access. When an IPFS node's garbage collection storage limit is reached (a limit that varies between clients, such as IPFS Desktop and Brave), unpinned cached data is cleared to make room for other data. This clearing of unpinned data is referred to as the IPFS garbage collection process.
If you want data to remain available for a long period of time, you can pin it. Pinning keeps the data on the IPFS network indefinitely, since pinned data is skipped during the garbage collection process, allowing for faster data retrieval times.
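The relationship between pinning and garbage collection can be sketched as a toy model (the function and data here are purely illustrative, not the actual IPFS implementation):

```python
def run_garbage_collection(cache: dict, pinned: set) -> dict:
    """Toy model of IPFS garbage collection: unpinned blocks are
    evicted from the cache, pinned blocks survive."""
    return {cid: data for cid, data in cache.items() if cid in pinned}

cache = {"cid-a": b"...", "cid-b": b"...", "cid-c": b"..."}
pinned = {"cid-b"}

cache = run_garbage_collection(cache, pinned)
assert set(cache) == {"cid-b"}  # only the pinned content remains
```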
Files uploaded to an IPFS bucket on Filebase are automatically pinned to IPFS and stored with 3x replication across the Filebase infrastructure by default, at no extra cost to you. This means your data remains accessible and reliable in the event of a disaster or outage, and won't be affected by the IPFS garbage collection process.
Pinning can be achieved through the Filebase Web Console or the Filebase S3-compatible API.
Pinning data is a unique feature of the IPFS network, and is not offered for buckets created on other networks.
There is no maximum file size for the IPFS network, but files larger than 5GB must be uploaded using the Filebase S3-compatible API to utilize Multipart Upload.
IPFS Buckets are ideal for:
  • Data sharing without managing authentication or permissions.
  • Hosting videos for streaming.
  • Hosting datasets for data analysis.
  • Hosting NFT Collections and storing associated NFT metadata and files.
  • Storing scripts, code, and content for a decentralized application.
For more information on IPFS, please refer to the IPFS Documentation.


Sia

Sia is an open source decentralized storage network that leverages blockchain technology to create a secure and redundant cloud storage platform.
Filebase works directly with Sia as a node operator, meaning Filebase manages all storage contracts on behalf of Filebase users. Every object that is uploaded to a Filebase Sia bucket is split into multiple pieces using Reed-Solomon erasure coding that features a 10 out of 30 algorithm. Objects are split into 30 pieces then geographically distributed to Sia host servers all around the world. Only 10 of the 30 pieces need to be available in order to process a download request. That means that 20 pieces of the object can be destroyed, offline, or otherwise unavailable, creating native redundancy and high availability.
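The resilience of the 10-of-30 scheme can be quantified with a simple model. Under the (admittedly idealized) assumption that each of the 30 hosts is reachable independently with some probability, the chance that at least 10 pieces are available follows a binomial tail sum. A sketch, with an illustrative function name and example uptime figure:

```python
from math import comb

def availability(n: int = 30, k: int = 10, p_up: float = 0.9) -> float:
    """Probability that at least k of n independently hosted pieces
    are reachable, given each host is up with probability p_up.

    Illustrative model only: real host failures are not independent,
    and 0.9 is an example uptime, not a measured Sia figure.
    """
    return sum(comb(n, i) * p_up**i * (1 - p_up)**(n - i)
               for i in range(k, n + 1))

# Needing only 10 of 30 pieces means up to 20 can be lost; even with
# 90% per-host uptime the object is available essentially always.
p = availability(30, 10, 0.9)
assert p > 0.999
```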
Sia uses the Threefish algorithm for high performance and secure encryption. Threefish is especially hardened against related-key attacks and side-channel attacks.
The maximum individual object size that can be stored on Sia is 300GB.
All data uploaded to a Sia bucket is private by default.
Sia buckets are ideal for:
  • Increased data privacy.
For more information on Sia, please refer to the Sia Documentation.
If you have any questions, please join our Discord server, or send us an email at [email protected]