What Is The Difference Between Each Storage Network?
This page details the differences between the currently supported networks utilized by Filebase.
Filebase currently utilizes the IPFS, Sia, Skynet, and Storj decentralized networks. Each of these networks protects stored data with encryption and erasure coding technologies and offers native geo-redundancy. Each network, however, is better suited to certain use cases than others based on its underlying framework and technology.

IPFS

InterPlanetary File System, or IPFS, is a distributed and decentralized storage network for storing and accessing files, websites, data, and applications. IPFS uses peer-to-peer network technology to connect a series of nodes located across the world that make up the IPFS network.
Three core concepts distinguish IPFS:
  • Unique Data Identification via Content Addressing: Data stored on IPFS is located through its content address rather than its physical location. When data is stored on IPFS, it is split into a series of pieces, each with its own unique content identifier, or hash. This unique identifier is referred to as the CID, and it links each piece to all the other pieces that make up the data.
  • Content Linking via Directed Acyclic Graphs (DAGs): DAGs are a type of hierarchical data structure where each node and stored object on the IPFS peer-to-peer network is identified by a hash of the node’s contents, linking content together.
  • Content Discovery through Distributed Hash Tables (DHTs): DHTs are databases of keys and values that are split across all the peers on a distributed network. To locate content, you query the DHT through a peer on the network, which tells you which peers are storing the blocks of content that make up the data object you’re requesting.
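The core property behind content addressing can be shown with a toy sketch: the identifier is derived from the data itself, so identical content always maps to the identical address. (Real IPFS CIDs additionally encode a multihash, codec, and version in a multibase encoding; the bare SHA-256 digest below is a simplification for illustration.)

```python
import hashlib

def toy_content_id(data: bytes) -> str:
    # The address is computed from the content, not assigned by a server,
    # so any node holding the same bytes produces the same identifier.
    return hashlib.sha256(data).hexdigest()

same_a = toy_content_id(b"hello ipfs")
same_b = toy_content_id(b"hello ipfs")
diff = toy_content_id(b"hello ipfs!")
assert same_a == same_b  # same content -> same address
assert same_a != diff    # any change to the content -> different address
```

This is also why a CID doubles as an integrity check: if the bytes you fetch hash to the CID you asked for, they are the bytes you wanted, regardless of which peer served them.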

Native IPFS URLs

Applications that natively support IPFS content addressing can refer to content stored on IPFS in the format:
ipfs://{CID}/{optional path to resource}
This format doesn’t work for applications or tools that rely on HTTP, such as curl or wget. For these tools, you need to use an IPFS gateway.
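Rewriting a native IPFS URL into an HTTP gateway URL is a simple string substitution, sketched below (the public ipfs.io gateway is used as an example default; any gateway host works):

```python
def ipfs_uri_to_gateway_url(uri: str, gateway: str = "https://ipfs.io") -> str:
    """Rewrite ipfs://{CID}/{optional path} into an HTTP gateway URL."""
    prefix = "ipfs://"
    if not uri.startswith(prefix):
        raise ValueError(f"not a native IPFS URL: {uri!r}")
    return f"{gateway}/ipfs/{uri[len(prefix):]}"

# The resulting URL can be fetched with curl, wget, or a browser.
url = ipfs_uri_to_gateway_url("ipfs://QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR")
assert url == "https://ipfs.io/ipfs/QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR"
```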

IPFS Gateways

Content stored on IPFS can be accessed by using an IPFS gateway. Gateways are used to provide workarounds for applications that don’t natively support IPFS. An IPFS gateway can be local, private, or public, and uses the IPFS content ID to provide a URL link to the content for access to the stored content.
Filebase's native IPFS gateway is as follows: https://ipfs.filebase.io/ipfs/{CID}
All content stored on IPFS through Filebase can typically be accessed through the Filebase gateway with faster response times than through other public gateways, because the Filebase gateway is peered directly with Filebase's IPFS nodes. The Filebase gateway is also peered with the IPFS gateways of other pinning services.

IPFS Subdomain Gateways

The format of a subdomain gateway is as follows:
https://{CID}.ipfs.dweb.link
Using a subdomain gateway as a drop-in replacement for a traditional path gateway removes the need for a CID version translation step. Subdomain gateways require a case-insensitive CIDv1, so there is a difference between opening an older CIDv0 resource and a CIDv1 resource: when a CIDv0 is accessed through a traditional gateway, the gateway returns an HTTP 301 redirect to a subdomain, converting the CIDv0 CID to a CIDv1 CID along the way.
For example, opening a CIDv0 resource using a traditional gateway at:
https://dweb.link/ipfs/QmbWqxBEKC3P8tqsKc98xmWNzrzDtRLMiMPL8wBuTGsMnR
returns a redirect to a CIDv1 representation at:
https://bafkreidgvpkjawlxz6sffxzwgooowe5yt7i6wsyg236mfoks77nywkptdq.ipfs.dweb.link/
By using a subdomain gateway initially, there is no need for the conversion step to take place.
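At the URL level, the path-to-subdomain translation is mechanical once the CID is already CIDv1, as this sketch shows (the helper name is ours, not part of any IPFS tooling):

```python
from urllib.parse import urlsplit

def to_subdomain_url(path_gateway_url: str) -> str:
    """Rewrite https://{host}/ipfs/{CIDv1}/{path} into
    https://{CIDv1}.ipfs.{host}/{path}.

    Assumes the CID is already a case-insensitive CIDv1; a CIDv0
    must be converted first, which is exactly the 301-redirect step
    a traditional gateway performs.
    """
    parts = urlsplit(path_gateway_url)
    _, namespace, cid, *rest = parts.path.split("/")  # ['', 'ipfs', cid, ...]
    tail = "/" + "/".join(rest)
    return f"{parts.scheme}://{cid}.{namespace}.{parts.netloc}{tail}"

assert (
    to_subdomain_url("https://dweb.link/ipfs/bafkreidgvpkjawlxz6sffxzwgooowe5yt7i6wsyg236mfoks77nywkptdq")
    == "https://bafkreidgvpkjawlxz6sffxzwgooowe5yt7i6wsyg236mfoks77nywkptdq.ipfs.dweb.link/"
)
```

Putting the CID in the hostname also gives each piece of content its own browser origin, which is why subdomain gateways are preferred for hosting web apps.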
For more information on IPFS CIDs and fetching IPFS links, see our guide here.

What is ‘pinning’ data with IPFS?

When data is stored in a Filebase bucket, Filebase’s native edge caching technology keeps that data cached at the network’s edge locations for quick access. On the wider IPFS network, however, nodes only cache unpinned data temporarily: when a node’s garbage collection storage limit is reached (a limit that varies between implementations, such as the IPFS Desktop client and Brave), cached unpinned data is cleared to make room for other data. This clearing of unpinned data is referred to as the IPFS garbage collection process.
If you want data to remain available for a long period of time, you can pin it. Pinned data is skipped during the garbage collection process, so it stays on the IPFS network indefinitely, allowing for faster data retrieval times.
Files uploaded to an IPFS bucket on Filebase are automatically pinned to IPFS and stored with 3x replication across the Filebase infrastructure by default, at no extra cost to you. This means your data remains accessible and reliable in the event of a disaster or outage, and won't be affected by the IPFS garbage collection process.
Pinning can be achieved through the Filebase Web Console or the Filebase S3-compatible API.
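As a minimal sketch of the S3-compatible route, the upload below uses the third-party boto3 library against Filebase's S3 endpoint; the bucket name, object key, and credentials are placeholders. No separate "pin" request is needed, since objects landing in an IPFS bucket are pinned automatically:

```python
FILEBASE_ENDPOINT = "https://s3.filebase.com"  # Filebase's S3-compatible endpoint

def upload_and_pin(bucket: str, key: str, local_path: str) -> None:
    """Upload a file to an IPFS bucket; Filebase pins it automatically."""
    import boto3  # third-party AWS SDK; works against S3-compatible endpoints

    s3 = boto3.client(
        "s3",
        endpoint_url=FILEBASE_ENDPOINT,
        aws_access_key_id="YOUR_ACCESS_KEY",      # placeholder credential
        aws_secret_access_key="YOUR_SECRET_KEY",  # placeholder credential
    )
    # A plain PutObject is all that's needed -- pinning happens server-side.
    s3.upload_file(local_path, bucket, key)
```

Calling `upload_and_pin("my-ipfs-bucket", "photo.png", "./photo.png")` (with real credentials and a real bucket) would upload and pin the file in one step; any other S3-compatible tool pointed at the same endpoint behaves the same way.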
Pinning data is a unique feature of the IPFS network, and is not offered for buckets created on other networks.
Since IPFS data is pinned on the Sia network, the maximum individual object size for objects uploaded to IPFS is 300GB.
IPFS Buckets are ideal for:
  • Hosting videos for streaming or live-streaming.
  • Hosting datasets for data analysis.
  • Hosting NFT Collections and storing associated NFT metadata and files.
  • Storing scripts, code, and content for a decentralized application.
For more information on IPFS, please refer to the IPFS Documentation.

Sia

Sia is an open source decentralized storage network that leverages blockchain technology to create a secure and redundant cloud storage platform.
Filebase works directly with Sia as a node operator, meaning Filebase manages all storage contracts on behalf of Filebase users. Every object uploaded to a Filebase Sia bucket is split into multiple pieces using Reed-Solomon erasure coding with a 10-of-30 scheme: objects are split into 30 pieces, which are geographically distributed to Sia host servers around the world. Only 10 of the 30 pieces need to be available to process a download request, meaning up to 20 pieces of the object can be destroyed, offline, or otherwise unavailable, creating native redundancy and high availability.
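The redundancy math behind the 10-of-30 scheme can be worked through directly. Assuming (simplistically) that hosts fail independently with the same probability, the chance that an object is retrievable is a binomial tail:

```python
from math import comb

K, N = 10, 30  # Sia's 10-of-30 Reed-Solomon scheme

def availability(p_host: float, k: int = K, n: int = N) -> float:
    """P(at least k of n pieces are reachable), assuming each host is
    independently up with probability p_host (a simplifying assumption)."""
    return sum(comb(n, i) * p_host**i * (1 - p_host)**(n - i)
               for i in range(k, n + 1))

assert N - K == 20                   # up to 20 pieces can be lost, no data loss
assert N / K == 3.0                  # 3x expansion: 30 pieces of 1/10th size
assert availability(0.9) > 0.999999  # even at only 90% per-host uptime
```

The trade-off is explicit here: the object's availability is dramatically higher than any single host's, at the cost of a 3x storage expansion factor on the network.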
Sia uses the Threefish algorithm for high performance and secure encryption. Threefish is especially hardened against related-key attacks and side-channel attacks.
The maximum individual object size that can be stored on Sia is 300GB.
All data uploaded to a Sia bucket is private by default.
Sia buckets are ideal for:
  • Increased data privacy.
For more information on Sia, please refer to the Sia Documentation.

Skynet

Skynet is a decentralized storage platform that leverages the Sia network. This technology is built for high availability, scalability, and easy file sharing.
All data uploaded to a Skynet bucket is by default publicly accessible.
Every file uploaded to Skynet returns what is called a Skylink. A Skylink is a unique content identifier, similar to a hash or UUID.
The maximum individual object size that can be stored on Skynet is 300GB.
Skynet buckets are ideal for:
  • Easy file sharing.
  • High Scalability.
  • High Availability.
For more information on Skynet, please refer to the Skynet Documentation.

Storj

Storj is an open source decentralized cloud storage network. Filebase integrates natively with the Storj network, allowing for a simple and affordable way to upload your data onto it.
Objects uploaded to Storj are split into 80 pieces and distributed across thousands of diverse nodes, hosted by a variety of service providers in nearly 100 countries. Retrieving an object only requires 29 of these 80 pieces.
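The same erasure-coding arithmetic as Sia's applies here, just with different parameters:

```python
K, N = 29, 80  # Storj: any 29 of 80 pieces reconstruct the object

assert N - K == 51                  # up to 51 pieces can be unavailable
expansion = N / K                   # each piece is 1/29th of the object's size
assert round(expansion, 2) == 2.76  # ~2.76x storage expansion on the network
```

With 51 of 80 pieces expendable, Storj tolerates proportionally more loss than Sia's 10-of-30 scheme while using a slightly lower expansion factor (~2.76x vs. 3x).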
Storj uses AES-256-GCM symmetric encryption on all objects stored on the network.
The maximum individual object size that can be stored on Storj is 5TB.
Storj buckets are ideal for:
  • Large datasets and large objects.
  • Increased data reliability.
  • Increased data stability.
For more information on Storj, please refer to the Storj Documentation.
If you have any questions, please join our Discord server or send us an email at [email protected].