Weighing in on some thoughts for IPFS clusters and authentication.
As discussed in a call today with @Schwartz10 and @stellarmagnet, I see two primary longer-term strategies for hosting IPFS data in the Network:
Association-backed, potentially pinned by all Flock teams and/or service providers: hosts mission-critical data for the Network, e.g. Flock apps, radspec and token registries, etc.
- Makes the most sense for collaborative clusters, where each Flock team may be expected to run a replication node, and pinning permissions could be concentrated primarily in the Association or delegated to a small technical group
- We have already begun this process with AGP28, where the Aragon client's releases are becoming more decentralized from A1's control (and ideally would be kept pinned on servers other than just A1's)
Per-app / per-Flock-backed data stores: app-specific data, e.g. TPS's Projects app's markdown files, Pando-backed repos
- Other teams could altruistically replicate these data sets on goodwill / reciprocation
- Service providers could provide replication nodes for a fee
In the long run, I would hope that each organization eventually begins to run its own infrastructure (or rent it via service providers) to pin important information related to its operations (similar to how basically every organization of 5+ people in modern countries has either self-hosted or paid cloud-backed solutions).
However, in both the short and long term, I get the impression there would be considerable value in the Association providing infrastructure for organizations to pin a reasonable amount of storage (e.g. 10–100 MB) for free. Beyond this range, there could be paid service tiers provided by either the Association or other service providers.
This limited amount would ideally be large enough for small users to frictionlessly upload their organization profiles, cross-org customizations (layouts, local labels, etc), and app-specific data blobs.
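A free tier like this could be enforced by whatever layer fronts the pinning API, simply by tracking bytes pinned per organization. A minimal sketch, assuming an in-memory store and the upper 100 MB bound (all names and numbers here are illustrative, not a finalized design):

```javascript
// Hypothetical per-organization free-tier quota check.
const FREE_TIER_BYTES = 100 * 1024 * 1024; // assumed 100 MB upper bound

// Stand-in for whatever persistent store the auth layer would
// actually use to track bytes pinned per organization address.
const pinnedBytes = new Map();

// Returns true if the organization may still pin `size` more bytes for free.
function withinFreeTier(orgAddress, size) {
  const used = pinnedBytes.get(orgAddress) || 0;
  return used + size <= FREE_TIER_BYTES;
}

// Record a successful pin against the organization's quota.
function recordPin(orgAddress, size) {
  const used = pinnedBytes.get(orgAddress) || 0;
  pinnedBytes.set(orgAddress, used + size);
}
```

Requests past the limit would then be redirected to a paid tier rather than rejected outright.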
Potential per-organization authentication strategy
I assume we are able to create a thin authentication layer on top of IPFS clusters, either through a reverse proxy or some other means of proxying authentication requests, that is able to track the amount of data pinned by each author. If not, more research will be needed on creating this type of authentication "shell" around the IPFS API.
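As a sketch of what such a shell could look like: a gate in front of the cluster API that rejects any pin request failing an authorization hook before it is ever proxied through. The header names and the `isAuthorized` hook are assumptions, not an existing API:

```javascript
// Hypothetical authentication "shell" in front of the IPFS cluster API.
// `isAuthorized` and `forwardToCluster` are injected so the gate itself
// stays agnostic about how checks and proxying are actually implemented.
function makePinGate(isAuthorized, forwardToCluster) {
  return function handlePin(req) {
    const org = req.headers['x-aragon-org']; // assumed header name
    const cid = req.headers['x-pin-cid'];    // assumed header name
    if (!org || !cid || !isAuthorized(org, req)) {
      return { status: 403, body: 'pin rejected' };
    }
    // Only authorized requests ever reach the real cluster API.
    return forwardToCluster(cid, org);
  };
}
```

In a real deployment this would sit behind a reverse proxy (or be the proxy itself), with `forwardToCluster` calling the cluster's pinning endpoint.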
Extending @mcormier’s message signature strategy, we could augment it to include some organization-based checks:
1. Hardcode a role into the Aragon infrastructure, e.g. `DATA_UPLOAD_ROLE`, likely in the `Kernel`
2. Allow organizations to assign certain apps or accounts a permission granting `DATA_UPLOAD_ROLE`
3. Send requests to pin either through an app or an EOA (see below)
4. The authentication layer would check if the requester has permission to upload data on behalf of an organization (`Kernel.hasPermission(<requester>, <kernel>, DATA_UPLOAD_ROLE)`)
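The final permission check above could be sketched with an in-memory stand-in for the on-chain ACL (the grant encoding and helper names are purely illustrative; a real server would make an `eth_call` to the organization's `Kernel`):

```javascript
// Mocked server-side permission check mirroring
// Kernel.hasPermission(<requester>, <kernel>, DATA_UPLOAD_ROLE).
const DATA_UPLOAD_ROLE = 'DATA_UPLOAD_ROLE';

// `grants` is a Set of "entity|role" strings standing in for the ACL.
function makeKernel(grants) {
  return {
    hasPermission(entity, _where, role) {
      return grants.has(`${entity}|${role}`);
    },
  };
}

// What the authentication layer would ask before accepting a pin.
function canUpload(kernel, requester) {
  return kernel.hasPermission(requester, kernel, DATA_UPLOAD_ROLE);
}
```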
Step 3 will differ based on the requester, as apps would not be able to sign messages and their contracts (obviously) cannot make HTTP calls:
- If an EOA requests the pin, the user would only be required to provide the organization’s address and a signature of the CID as HTTP headers for a pinning request
- If an app is assigned the permission:
- If the contract can immediately invoke the action (see related issue), allow an EOA to send a pin request through HTTP, but require a “final forwarder” to be part of the request headers
- The authentication layer also needs to check that the EOA is indeed able to forward to the "final forwarder" (see `@aragon/wrapper`), and that the app has the correct permissions
- If not:
- First, we assume a Network-wide "fake" contract (that is never deployed), e.g. `"0xFFFF..FFF"` - `"DATA_UPLOAD_ROLE"`
- The Aragon client could create EVMScripts to this “fake” contract address with ABI-encoded IPFS hashes as its calldata
- An EOA sends an HTTP request with “proof” that an app requested the pin, supplying enough information that allows the server to verify the EVMScript encoded in the app that calls this “fake” contract, and that the app has the correct permissions.
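The two request flows above could share one validation entry point: plain EOA pins carry only an organization address and a signature over the CID, while app pins additionally name a "final forwarder" that the signer must be able to reach through the forwarding graph (the way `@aragon/wrapper` computes forwarding paths). The header names, `recoverSigner`, and the mocked permission/forwarding lookups are all assumptions, not a spec:

```javascript
// Hypothetical validation for both pin request flows.
const DATA_UPLOAD_ROLE = 'DATA_UPLOAD_ROLE';

// `chain` bundles the lookups the auth layer would really do on-chain or
// via @aragon/wrapper: recoverSigner (e.g. ecrecover over a signed CID),
// hasPermission, and canForward (one forwarding hop per entity).
function validatePinRequest(headers, chain) {
  const { recoverSigner, hasPermission, canForward } = chain;
  const org = headers['x-aragon-org'];
  const cid = headers['x-pin-cid'];
  const sig = headers['x-pin-signature'];
  if (!org || !cid || !sig) return false;

  const signer = recoverSigner(cid, sig);
  const finalForwarder = headers['x-final-forwarder'];
  if (!finalForwarder) {
    // Plain EOA flow: the signer itself must hold DATA_UPLOAD_ROLE.
    return hasPermission(signer, org, DATA_UPLOAD_ROLE);
  }
  // App flow: the app (final forwarder) must hold the role, and the
  // signer must actually be able to forward to it. Breadth-first
  // search over the mocked forwarding graph.
  if (!hasPermission(finalForwarder, org, DATA_UPLOAD_ROLE)) return false;
  const queue = [signer];
  const seen = new Set(queue);
  while (queue.length > 0) {
    const current = queue.shift();
    if (current === finalForwarder) return true;
    for (const next of canForward(current)) {
      if (!seen.has(next)) {
        seen.add(next);
        queue.push(next);
      }
    }
  }
  return false;
}
```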
This last case may be hard to generalize (as the "proof" could be hard to generate; I see no easy way to standardize an interface for testing if a forwarder would actually execute an action), so the alternative is to actually deploy a contract that just emits an event with `contentHash`, and have a server watch that contract's events (a "watch tower").
This contract could alternatively have a mapping of `keccak256(kernel, contentHash) -> bool` (a bit more expensive than an event) to remove the "watch tower" and allow users to use the HTTP flow (as this contract would provide direct proof).
If the organization encodes an IPFS hash on-chain (either in the `Kernel` or a very simple app), it could also automatically call this contract for each storage update.