Jun 06, 2021

▦ 6 → Interplanetary Filesystems

by: Shahruz Shaukat

This post is mostly about a specific pattern for updating data on IPFS and how that could relate to a "Relational OS".

My overview here is simplified; it's worth researching further if how it's implemented seems interesting.


If smart contracts on blockchains are the Web3 version of APIs and databases, then IPFS is the Web3 version of S3 (the most widely used file storage infrastructure).

It's expensive to store data on the blockchain directly because of its architecture. Every computer running a blockchain node needs to store a copy of the data locally, and needs to download every transaction as it runs. If those transactions contained the data for static media files, the network would have congestion problems. So while it's possible to store as much data as you want on the blockchain directly, it becomes prohibitively expensive (in gas fees) very quickly.

IPFS deals with a separate problem (q: how do you decentralize file storage efficiently? a: break every uploaded file into parts, fingerprint each part with a cryptographic hash, reference the parts by those hashes so identical data isn't stored redundantly, and spread the parts across lots of computers using algorithms that are beyond my understanding).
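For a sense of what that looks like from the developer's side, here's a minimal sketch using the ipfs-http-client package, assuming a local IPFS node is running with its HTTP API on the default port (a pinning service would work the same way with a different URL):

  // Sketch: add a file to IPFS and get back its content hash (CID).
  import { create } from "ipfs-http-client";
  import { readFile } from "fs/promises";

  async function uploadToIpfs(path: string): Promise<string> {
    const ipfs = create({ url: "http://127.0.0.1:5001/api/v0" });
    const { cid } = await ipfs.add(await readFile(path)); // IPFS chunks + hashes the bytes
    return cid.toString(); // e.g. "Qm..." — the same content always yields the same hash
  }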

IPFS hashes are 46 characters long in their usual base58 form, and the SHA-256 digest they encode is just 32 bytes, which is very cheap to store on the blockchain.
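To make that concrete, here's a small sketch using the multiformats package (the library the IPFS JS tooling uses for CIDs); treat the details as illustrative:

  // Sketch: a 46-character CIDv0 ("Qm...") is a base58 wrapper around a 32-byte SHA-256 digest.
  import { CID } from "multiformats/cid";
  import * as Digest from "multiformats/hashes/digest";

  // Extract the raw 32-byte digest — this is the compact form you'd store on-chain.
  function cidToBytes32(cidString: string): Uint8Array {
    return CID.parse(cidString).multihash.digest; // 32 bytes for a sha2-256 CIDv0
  }

  // Rebuild the full "Qm..." string from the stored 32 bytes (0x12 = sha2-256 code).
  function bytes32ToCid(digest: Uint8Array): string {
    return CID.createV0(Digest.create(0x12, digest)).toString();
  }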

So for example, a typical flow for a service that mints an NFT would involve (see the sketch after this list):

- Uploading the image file to IPFS and getting back its hash.
- Creating a metadata JSON file that references that image hash (along with a title, description, creator address, and so on), and uploading that JSON to IPFS as well.
- Running a transaction that mints the NFT on-chain and stores the metadata's IPFS hash with it.

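Here's a rough sketch of that flow in TypeScript. The contract interface (a `mint(tokenURI)` function) and the addresses/URLs are hypothetical placeholders; real services usually go through a pinning provider and their own contracts, so treat this as the shape of the flow rather than a recipe:

  // Sketch of the mint flow. The mint(tokenURI) contract function is hypothetical.
  import { create } from "ipfs-http-client";
  import { ethers } from "ethers";
  import { readFile } from "fs/promises";

  const ipfs = create({ url: "http://127.0.0.1:5001/api/v0" });

  async function mintNft(imagePath: string, signer: ethers.Signer) {
    // 1. Upload the image, get its IPFS hash.
    const image = await ipfs.add(await readFile(imagePath));

    // 2. Build the metadata JSON that points at the image, and upload that too.
    const metadata = {
      title: "My Image",
      description: "Check it out!",
      image: `ipfs://${image.cid.toString()}`,
      createdBy: await signer.getAddress(),
      createdAt: Math.floor(Date.now() / 1000),
    };
    const meta = await ipfs.add(JSON.stringify(metadata));

    // 3. Store the metadata hash on-chain in the mint transaction.
    const nft = new ethers.Contract(
      "0x...",                                          // placeholder NFT contract address
      ["function mint(string tokenURI) returns (uint256)"],
      signer
    );
    const tx = await nft.mint(`ipfs://${meta.cid.toString()}`);
    await tx.wait();
  }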
Now any Web3 client app or website can query the blockchain directly and receive the IPFS link to the metadata for the NFT. It can then retrieve that metadata with a standard web request.


Because IPFS fingerprints each uploaded file, there's no way to update a file and have it exist at the same URL.

But like the NFT example shows, IPFS doesn't just store media files, it can store JSON too.

That means you could design a rudimentary but useful pattern for updates: simply provide a "parent" hash if one exists.

So the metadata JSON from the example would look like this if we wanted to update the title and description while keeping track of its history:

  {
    "title": "My Updated Image",
    "description": "Check it out!!!",
    "image": "ipfs://QmAAAwwH1kffUqth77z1iDqKin14wzrCCWkhA2EuoDnY7a",
    "createdBy": "0xBb167bCe93F2e1Db5aAe834702C8BDAEaB5e9831",
    "createdAt": 1622936610,

    "parent": "ipfs://QmBBBwwH1kffUqth77z1iDqKin14wzrCCWkhA2EuoDnY7a"
  }

The example NFT service would then upload this new metadata JSON to IPFS and update the stored hash to the new one. Clients would be able to identify the "parent" attribute and add UI to see a history of that file. This can also extend into "forking" things: a new NFT could be minted by a different user containing a reference to a parent from another user.
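As a sketch, publishing an updated version might look like this: fetch the current metadata, build a new JSON with a "parent" pointer back at it, and upload that. The gateway URL is a placeholder and the metadata shape follows the example above:

  // Sketch: publish an updated version of some metadata, linking back to its parent.
  import { create } from "ipfs-http-client";

  const ipfs = create({ url: "http://127.0.0.1:5001/api/v0" });
  const GATEWAY = "https://ipfs.io/ipfs/";

  async function publishUpdate(
    currentUri: string,                        // e.g. "ipfs://Qm..."
    changes: { title?: string; description?: string }
  ): Promise<string> {
    const current = await (await fetch(GATEWAY + currentUri.replace("ipfs://", ""))).json();

    // Copy the old metadata, apply the edits, and point "parent" at the previous version.
    const updated = { ...current, ...changes, parent: currentUri };

    const { cid } = await ipfs.add(JSON.stringify(updated));
    return `ipfs://${cid.toString()}`;         // store this new hash on-chain
  }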

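And on the read side, a client could rebuild that history by walking the "parent" links, roughly like this (again through a public gateway):

  // Sketch: follow "parent" links to reconstruct a file's version history, newest first.
  const GATEWAY = "https://ipfs.io/ipfs/";

  async function history(uri: string): Promise<any[]> {
    const versions: any[] = [];
    let current: string | undefined = uri;
    while (current) {
      const meta = await (await fetch(GATEWAY + current.replace("ipfs://", ""))).json();
      versions.push(meta);
      current = meta.parent; // the original version has no parent, which ends the walk
    }
    return versions;
  }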

Using JSON on IPFS could also be a way to handle things like comments or discussions. A comment thread on a post could be described by this JSON:

  [
    {
      "text": "Cool!",
      "createdBy": "0xaA067bCe93F2e1Db5aAe834702C8BDAEaB5e2353",
      "createdAt": 1622936610
    },
    {
      "text": "Wow!",
      "createdBy": "0xCC999bCe93F2e1Db5aAe834702C8BDAEaB5e1234",
      "createdAt": 1622936608
    }
  ]

When a user wants to post a new comment, their client would update this comments JSON locally by appending the new comment to the array, upload it to IPFS, then run a transaction to update the stored IPFS hash for the comments on-chain.
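Sketched out, that flow might look roughly like the following; the two callbacks stand in for the hypothetical on-chain read and write of the comments hash:

  // Sketch: append a comment by re-uploading the whole thread and updating the stored hash.
  import { create } from "ipfs-http-client";

  const ipfs = create({ url: "http://127.0.0.1:5001/api/v0" });
  const GATEWAY = "https://ipfs.io/ipfs/";

  async function postComment(
    text: string,
    author: string,
    readHash: () => Promise<string>,           // hypothetical on-chain read of the current hash
    writeHash: (cid: string) => Promise<void>  // hypothetical transaction updating it
  ) {
    // 1. Fetch the current comments JSON for the post.
    const comments = await (await fetch(GATEWAY + (await readHash()))).json();

    // 2. Append the new comment locally.
    comments.push({ text, createdBy: author, createdAt: Math.floor(Date.now() / 1000) });

    // 3. Upload the updated thread and point the on-chain record at the new hash.
    const { cid } = await ipfs.add(JSON.stringify(comments));
    await writeHash(cid.toString());
  }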

There's at least one big issue with this approach: security without centralization. If a client has to re-upload all the past comments along with any new one, someone could write a malicious client that lets you edit past comments.

Some possible workarounds that I haven't fully thought through:


🕊 Where to next?

▦ 3   →   Markdown

▦ 7   →   Swollen Appendices

▦ 10   →   Governance

