Just enough decentralization

I wanted to start this thread to discuss how much decentralization is “enough”. When working on Aragon we are often forced to balance UX with decentralization.

For example, in order to not rely on a centralized service, Autark’s Discussion module requires an on-chain transaction for each comment. Similarly, the sync process for loading apps takes a long time, especially on the first load, because we do not currently support centralized caching.

My personal opinion is that decentralization should be thought of as a means to an end, not an end in and of itself. I think the end is that organizations can be sovereign and continue to operate even if infrastructure goes down. Adding a centralized caching layer can improve UX, and if the infrastructure goes down, users can use Aragon the old-fashioned way. If discussions rely on a centralized service, but apps are designed such that if that service is unavailable the discussion feature is simply disabled, then the organization is still functional.

What is the level of service that is important to maintain in a “fully” decentralized way?

What are some ways we can improve the product by relaxing some of these assumptions?

Outside of core functionality, is it reasonable to adopt “easy but centralized solutions” with the intention to replace them with more decentralized alternatives when they become “ready for the masses”?


A few articles that I think are relevant and add to this discussion:

I think that’s risky since we could easily become comfortable with the centralization trade-offs and stop pursuing the fully decentralized solution. On the other hand, a different angle (or just a way to phrase it) is the following:

- In optimistic circumstances (such as no Internet firewall, and no centralized infra provider like Infura being malicious), the UX may equal that of a Web2 product.
- In adversarial scenarios, the UX may degrade so that the core functionality of the product stays fully decentralized and censorship-resistant.



I think because users of the web client are already relying on many centralized points of failure (the aragon.org servers, DNS, Certificate Authorities, etc.), adding one more to improve loading speed would not be controversial. What I do think is important is that there is a “fully sovereign” option, which the desktop client should be geared towards: the ability to eliminate all “trusted third party” dependencies by using one’s own IPFS and Ethereum nodes to replace all centralized components.


Something I prototyped yesterday.

It’s a small (in-memory for now) caching server that works for any organisation. Replace the address in the URL with any org address and it will sync the entire state of the organisation just like the client would.

This also means that the first request will have no state in the cache, and then it might take a while for the cache to be fully up to date.

The time it takes depends on how long it would take the client to catch up from fresh, so orgs like Genesis will take a lot longer than the newest org, which I used in the example above.

There’s a short list of improvements I want to make to it, but it serves as a nice PoC. I am also of the opinion that integrating this into the Aragon client would be fairly easy, with the cached state serving as a snapshot that the client can sync from.
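A minimal sketch of the get-or-sync pattern such a server could use (the function names here are placeholders, not the actual implementation):

```javascript
// In-memory cache keyed by organisation address.
const cache = new Map();

// Placeholder: in the real server this replays the org's on-chain
// events (e.g. via aragon.js) to rebuild its state from scratch,
// which is why the first request for an org can take a while.
async function syncOrgState(orgAddress) {
  return { address: orgAddress, apps: [], syncedAt: Date.now() };
}

async function getOrgState(orgAddress) {
  // First request for an org has no cached state and kicks off
  // a full sync; later requests are served straight from memory.
  if (!cache.has(orgAddress)) {
    cache.set(orgAddress, await syncOrgState(orgAddress));
  }
  return cache.get(orgAddress);
}
```

Since the cache is per-org, any organisation address works; it just starts cold.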

Another thing I’ve thought about is serving this cached state, but verifying it in the background. This is done by running the full sync as the client normally would and checking the state along the way. This sort of enables a “trust but verify” paradigm. I also think it is very important that using the cache would be opt-in.
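A sketch of what that opt-in flow might look like (every name below is a placeholder stub, not the client’s real API):

```javascript
// Placeholder stubs standing in for the real client machinery.
const fetchCachedState = async (org) => ({ org, apps: ['voting'] });
const fullSync = async (org) => ({ org, apps: ['voting'] });

async function loadOrg(org, { useCache = false } = {}) {
  // Opt-in: the default path stays a fully trustless sync.
  if (!useCache) return fullSync(org);

  // Serve the cached snapshot immediately for fast loading...
  const cached = await fetchCachedState(org);

  // ...then verify it in the background by running the full sync
  // and comparing the result ("trust but verify").
  fullSync(org).then((verified) => {
    if (JSON.stringify(verified) !== JSON.stringify(cached)) {
      console.warn(`Cached state for ${org} failed verification`);
    }
  });

  return cached;
}
```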

I will try to do a small opt-in PoC integration in the client in the next few days.

Edit: This is the AGP10 org. It’s still syncing as of writing this.

Edit 2: I recommend installing a JSON viewer for the best experience when opening these links :slight_smile:


This is generally my position as well, though I think one important thing to consider is what aspects we are comfortable losing as service “degrades” and what functionality we consider core.

Contextual discussions, for example, are something I would probably be okay with losing when operating in a fully decentralized context (though I would prefer not to…). It’s one of those situations where I think it is okay to start with the easy centralized solution, and then move in the other direction as decentralized solutions become more mature.

Another is IPFS pinning. I think it’s great to retrieve content from IPFS, but I would be comfortable with a solution which propagates the content to IPFS and also hosts it on a centralized server; then, if the content isn’t available from the centralized server, it can try to fetch from IPFS. As IPFS becomes more reliable we can be less reliant on the centralized service, but in the meantime we can guarantee that content like “global” custom labels for orgs will be fetchable.
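That ordering can be sketched as a small fallback fetcher (the mirror URL and helper names are hypothetical):

```javascript
// Try each source in order and return the first successful
// response body; sources that error are skipped.
async function fetchWithFallback(cid, sources, fetchFn = fetch) {
  for (const url of sources) {
    try {
      const res = await fetchFn(url);
      if (res.ok) return res.text();
    } catch (_) {
      // Source unreachable; fall through to the next one.
    }
  }
  throw new Error(`Content ${cid} unavailable from all sources`);
}

// Example source list for a given content hash: the centralized
// mirror first, a public IPFS gateway as fallback. URLs are
// illustrative only.
const sourcesFor = (cid) => [
  `https://cache.example.org/ipfs/${cid}`, // hypothetical mirror
  `https://ipfs.io/ipfs/${cid}`,           // public IPFS gateway
];
```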

I also agree with this sentiment: if users prefer to access via aragon.org’s DNS, they should also be comfortable with being served a cached state. I think it’s important that it remains easy to “host it yourself”, but there may be services like push notifications that are more difficult to host yourself.

I think this is fine; I don’t think the host-it-yourself experience necessarily has to have feature parity with the centralized version, so long as “core” functionality is available.

This is rad! :exploding_head:

I think you should probably cross-post to Centralised caching solutions and continue the conversation there.


This is exactly the solution that I’d love to see. BTW, the caching server is super exciting!! How are you reducing the state? Running aJS on the server for each org?


Agreed, this is very similar to how mirrors work for Linux distros, where you have a group of centralized servers hosting content. In this case there could be a couple of more-or-less official mirrors, with IPFS as a fallback.


Yes, exactly, just running aJS :slight_smile: . I’ve also forked the client and added caching to it on a branch. It’s opt-in (resides in the Network tab of global preferences) but lacks verification right now.

  1. Clone https://github.com/feathercache/aragon
  2. Switch to branch feather-cache
  3. npm i
  4. npm run start:mainnet

Orgs you visit will be added to the caching server and it will start caching the state. It might take a while for some orgs.

For testing, foolish.aragonid.eth is fully cached and loads near-instantly.

The core integration in the client was just 12 lines of code; the rest was for adding the settings :blush:


This is kind of how it works right now, the client uses the AA’s IPFS gateway by default where we make sure all Aragon app content is always pinned (via the deployments repo), so it is effectively a centralized server already.

That’s fucking amazing, good job!



This may work for the “core” apps but doesn’t seem to work particularly well for the wider developer community. It also doesn’t solve the issue of pinning user-created content from within the apps (like you would need for an organization to save a “canonical” set of custom labels).

It’s possible that having a pinning server used by default is sufficient (no need for a centralized mirror), but there definitely seems to be a gap here.

What concerns me about this thread, in general, is that some of it relates to the practicality of decentralization. What I mean is that some of it doesn’t argue that we “shouldn’t” decentralize, only that it’s not practical (because on-chain transactions are slow and costly). That argument seems to say: “it should be done, but there are practical trade-offs”. In other words, “it should be done”.

In that case, are we just discussing how to compromise, given that the architecture doesn’t support what we want to do? If that’s what we are discussing, then as these solutions scale, isn’t it predictable that the problem will get worse (I’m not sure, because I’m not a dev, so this is a real question, not a presumption)? It seems to me, as a non-technical person, that if we are making compromises now in year 2, when the systems are small in scale, then by year 4 we will still be asking the question about more and more things as transactions get slower and heavier. The concern here is that perhaps we are beginning a downward trend of continuously asking “what corners can we cut?” Perhaps, in fact, because of the limits of the infrastructure, we are starting a trend towards centralization.

That direction is of concern, even if the current compromises are acceptable. I think we have all experienced personal and business situations where one compromise led to another and, after a while, we didn’t even notice how bad a relationship/job/financial deal had gotten. So, while the debate is useful, we might be asking the wrong question.

Perhaps a better question would be “Are we working on the correct architectural infrastructure given the fact that we wish we could be more decentralized?”


Thank you so much for your post! I really really appreciate your perspective and generally agree with everything you’re saying.

When it comes to the spectrum of centralized <–> decentralized, I think that people should have a choice. Some applications are only viable if there is higher throughput and lower fees. Also, some systems will work well in a “trust, but verify” model where a centralized operator or set of operators performs a service, but community members can check their work and, if the operators are found to be cheating, punish them accordingly. There are a lot of different designs possible, and one size does not fit all.

That being said, personally I’m here because I want decentralized, verifiable data and computation. For me that’s kind of essential. As you mentioned, we could always make more and more compromises, but… we’re in a market where there are now over a dozen “next generation smart contract blockchains” making these trade-offs to compete with Ethereum. At the end of the day, it looks more like a race to the bottom than a unique value prop. If we really want to fight for freedom, I think fighting for decentralization is a core part of that.

Again, here we are talking as if giving up “higher throughput and lower fees” is an inevitable consequence of decentralization. Why is that inevitable?

Chris Burniske and Ryan Selkis talk about it in depth in this episode of Unqualified Opinions. Essentially, cryptoeconomic mechanisms need ways to incentivize people to do stuff.

Currently, blockchains like Bitcoin and Ethereum use block rewards and transaction fees to incentivize people to secure the network. Block rewards diminish over time, compensating the risk of being an early adopter with larger rewards. Diminishing block rewards imply that the goal is for these types of networks to eventually be sustained by transaction fees. Since space in a block is a scarce resource, lots of transactions result in high transaction fees to be included in a block. High transaction fees incentivize people to work to secure the network.

Securing the network has hard capital requirements. Even though miners get paid in the network’s native token, they also have to cash out that token to pay for operating expenses. This adds liquidity to the market and distributes the token among more market participants.

As a network evolves, if it is successful, there will be more transactions, and that will drive up transaction fees. This results in layer 1 becoming an expensive finality/settlement layer while also incentivizing applications to move towards layer 2 and/or sharding to reduce fees. More transactions lead to more fees, which leads to incentives to move those transactions “up the chain” and only settle them on layer 1 occasionally. This is a natural evolution of cryptoeconomics in blockchain networks.
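The block-space dynamic can be illustrated with a toy fee market (all numbers are made up, not real network parameters):

```javascript
// Toy fee market: a block holds a fixed number of transactions,
// and block producers include the highest-fee ones first.
const blockCapacity = 3;

function feeToGetIncluded(pendingFees) {
  // Sort descending; the lowest fee that still fits in the block
  // is the effective going rate for inclusion.
  const sorted = [...pendingFees].sort((a, b) => b - a);
  return sorted[Math.min(blockCapacity, sorted.length) - 1];
}

// Light demand: plenty of space, cheap inclusion.
console.log(feeToGetIncluded([1, 2, 3]));       // 1
// Heavy demand: same block space, so the bar rises.
console.log(feeToGetIncluded([1, 2, 3, 8, 9])); // 3
```

Same scarce block space, more demand, higher fees, which is the pressure that pushes activity towards layer 2.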

The other way to incentivize people is via inflation. In proof-of-work systems this was done through block rewards, but those block rewards diminish over time. In proof-of-stake systems, however, many (not all) blockchains still create block rewards. Those block rewards are not distributed to miners who have hard physical costs to secure the network; they are distributed to those who stake on the network. This redistributes tokens to those who already have tokens. Those stakers have no hard costs of capital. In fact, the more tokens they have staked, the more tokens they will receive from inflationary block rewards. This incentivizes people to stake as many of their tokens as possible, resulting in a “rich get richer” type of scenario.

This is not the only problem. The other problem is that if there are very low transaction fees and/or very large blocks, then the only incentive is inflation. As mentioned, inflation does not increase the value of the network, but merely redistributes capital among current stakeholders. If the only reason to stake on the network is to get inflationary block rewards, but those rewards don’t actually increase the value of the network, and the network is virtually free for users because of low/zero transaction fees… then what is the value of the token? The token becomes worthless. No one needs it for anything. Holding it just keeps your piece of the pie, but no one needs a piece of the pie to use the network anyway, so the pie is worth nothing. Many proof-of-stake systems have additional features baked into their tokens that add value in other ways, but this is the dichotomy between scalability and cryptoeconomic security.
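To make the redistribution point concrete, here is a toy calculation (all numbers made up): proportional inflationary rewards grow stakers’ balances only by diluting non-stakers; no new value enters the system.

```javascript
// Balances before one reward period (tokens).
const balances = { stakerA: 600, stakerB: 300, nonStaker: 100 };
const totalSupply = 1000;
const inflationRate = 0.05; // 5% new tokens per period, all paid to stakers

const newTokens = totalSupply * inflationRate;            // 50 new tokens
const stakedTotal = balances.stakerA + balances.stakerB;  // 900 staked

// Rewards are proportional to stake.
balances.stakerA += newTokens * (balances.stakerA / stakedTotal);
balances.stakerB += newTokens * (balances.stakerB / stakedTotal);

const newSupply = totalSupply + newTokens; // 1050
// The non-staker's share of the supply falls from 10% to ~9.52%;
// the stakers' combined share grows by exactly that amount.
console.log(balances.nonStaker / newSupply);
```

The total “pie” is unchanged in real terms; only the slices move, which is the “rich get richer” dynamic described above.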

EDIT: Thinking about it a little more, I don’t think this directly answers your question in the context of this thread. In decentralized systems we need to verify that data is available, authentic, and received within an acceptable window of time. In order to do this we need to incentivize people to participate in that network, verifying and relaying data. This data can be stuff on IPFS, transactions sent to a DAO, or anything else on a network. The two most popular ways to do this in a trustless, decentralized, and cryptoeconomic way are the proof-of-work and proof-of-stake systems described above. There is also “proof of spacetime”, which Filecoin (the incentive layer built on IPFS) uses, and many other flavors of “proofs of X”. At the end of the day, though, people have to be incentivized to do stuff, and they have to prove that they did that stuff to get rewarded or get punished. This makes the network run. How to do this is an open design question. One size does not fit all. If the network is free, no one is incentivized to contribute; but if the network is too costly, no one will want to use it.
