Rational Ignorance and Scaling Governance

As the size of a group increases, and the number and complexity of decisions facing the group grows, it becomes increasingly difficult to coordinate effectively. There are many possible explanations for why this occurs, but at a high level the problem can be summarized as follows:

  1. The burden of making effective decisions increases with the volume and complexity of decisions that need to be made.
  2. An individual's impact on the result of a decision decreases as the size of the group grows.

From an individual/microeconomic perspective this leads to rational ignorance: the costs of actively evaluating every decision, coming to an informed opinion, and ultimately participating in the governance process outweigh the expected benefits. In practice this often leads to low voter turnout and relatively careless voting based on emotional reactions and minimal research.

A naive solution would be to pay people to participate, in an effort to make the expected value of voting higher than its cost. Unfortunately, while this may increase participation, it does not actually incentivize researching issues carefully; instead it pushes users to vote negligently.

A less naive approach is used by TCRs and other Schelling-based voting games, where users are rewarded only if they end up in the majority and are punished if they end up in the minority. This creates an incentive to carefully evaluate options and predict what the group consensus will be. Putting aside complications related to bribery, credible commitments, and other strategies that might influence the equilibria of such a system, the pattern creates an economic incentive for participants to vote for the option that they believe aligns best with the group, rather than vote solely based on their personal preferences.
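To make the incentive concrete, here is a rough sketch (in Python, with hypothetical parameters like `slash_rate`) of the payout rule in such a Schelling-based game: everyone stakes a deposit on an option, the option with the most stake wins, minority deposits are partially slashed, and the slashed amount is redistributed to majority voters pro rata to their stake.

```python
# Sketch of a Schelling-game payout rule. The slash_rate parameter and
# pro-rata redistribution are illustrative assumptions, not a spec.
from collections import defaultdict

def settle(votes, slash_rate=0.5):
    """votes: list of (voter, option, stake). Returns net payout per voter."""
    totals = defaultdict(float)
    for _, option, stake in votes:
        totals[option] += stake
    winner = max(totals, key=totals.get)  # option backed by the most stake
    # minority voters lose slash_rate of their stake...
    slashed = sum(s * slash_rate for _, o, s in votes if o != winner)
    payouts = {}
    for voter, option, stake in votes:
        if option == winner:
            # ...and majority voters recover their stake plus a pro-rata share
            payouts[voter] = stake + slashed * stake / totals[winner]
        else:
            payouts[voter] = stake * (1 - slash_rate)
    return payouts

payouts = settle([("a", "yes", 10), ("b", "yes", 10), ("c", "no", 10)])
# "yes" wins 20 vs 10; "c" forfeits half their deposit to "a" and "b"
```

The key property is that your payout depends on matching the group outcome, not on your private preference, which is what pushes voters toward predicting consensus.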

Combining a Schelling-based voting game with sortition can allow a subset of participants to reach the same conclusion as the larger group: since individual participants cannot know which subset of the group has been chosen, they should assume the random sample is representative of the larger group. This is how the proposed Aragon Court mechanism enables disputes to be settled cheaply, by requiring only a small sample of jurors to resolve a dispute.
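The representativeness claim can be checked with a rough simulation (illustrative numbers only): draw a random jury from a large group and count how often the jury's majority matches the whole group's majority. Larger juries agree with the full group more often, which is the statistical basis for cheap sortition-based resolution.

```python
# Illustrative simulation: how often does a randomly sampled jury's
# majority match the majority of the full population? Seeded for
# reproducibility; all numbers are assumptions for the sketch.
import random

random.seed(42)
population = ["A"] * 600 + ["B"] * 400  # group-wide majority is "A" (60%)

def jury_matches_group(jury_size, trials=2000):
    hits = 0
    for _ in range(trials):
        jury = random.sample(population, jury_size)  # sortition draw
        if jury.count("A") > jury_size / 2:          # jury majority
            hits += 1
    return hits / trials

small, large = jury_matches_group(5), jury_matches_group(51)
# a larger random jury agrees with the full group more often
```

This is also why appeals that escalate to a larger jury make sense: each escalation tightens the match between sample and population.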

However, often there are decisions which would ideally be made by a sub-group of specialists on behalf of a larger community. In such a case a Schelling mechanism would only incentivize participants of the sub-group to pick preferences that align with the preferences of the other specialists within the sub-group, and not necessarily with the larger community.

A possible solution, proposed in the draft whitepaper, is to use the Court mechanism as a way to anchor community values and expectations on subsets of users who choose to participate in different aspects of network governance. For example, to participate in governance over the network's discretionary funds, a participant would need to collateralize an agreement using ANT; the agreement would define a code of conduct that could prohibit obvious abuses of authority and set other expectations. In the event that a participant's actions do not align with the agreement, anyone can create a dispute and put the offending party's collateral at risk. In exchange, the network would compensate participants for taking on this risk and the opportunity cost of participating. This essentially means we are able to pay people to govern the network, while ensuring that even if only a small subset of the group participates, they are generally held accountable to the network as a whole.
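The lifecycle of such a collateralized agreement can be sketched roughly as follows (class and method names are hypothetical, not from the whitepaper): a participant locks collateral against a code of conduct, earns compensation each period for governing, and forfeits the collateral if the Court upholds a dispute against them.

```python
# Hypothetical sketch of the collateralized-agreement pattern described
# above; names and numbers are illustrative assumptions.
class GovernanceAgreement:
    def __init__(self, collateral, reward_per_period):
        self.collateral = collateral            # ANT locked by the participant
        self.reward_per_period = reward_per_period
        self.active = True

    def pay_period(self):
        # the network pays for the risk + opportunity cost of participating
        return self.reward_per_period if self.active else 0

    def resolve_dispute(self, court_upheld):
        # anyone can raise a dispute; the Court ruling decides the outcome
        if court_upheld:
            forfeited = self.collateral
            self.collateral, self.active = 0, False
            return forfeited                    # slashed, participant removed
        return 0

agreement = GovernanceAgreement(collateral=1000, reward_per_period=10)
forfeited = agreement.resolve_dispute(court_upheld=True)
```

The point of the sketch is the accountability loop: compensation flows only while the agreement is honored, and the slashing path is open to any disputer, not just a fixed overseer.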

I think this solution is crucial, given that individual voters trying to align with the majority will be extremely susceptible to falling into camps based merely on popularity rather than on an **actual benefit or logical step forward** for the network.

I wanted to point to a phenomenon that has the potential to influence the equilibria and hopefully start a discussion about ways to avoid it.

The disincentive to end up in the minority could give rise to a market for off-chain proxy advisory services, similar to what we have seen in traditional markets. Proxy advisory firms provide institutional investors with research, data, and recommendations on management and shareholder proxy proposals that are voted on at a company’s annual meeting. ISS, the largest, reports 1,700 clients that manage an estimated $26 trillion in assets; smaller institutional investors effectively “outsource” their government-mandated fiduciary voting obligations at low cost thanks to economies of scale.

The cost of hiring a proxy advisory service < the cost of in-house research or of receiving a fine from the SEC. This has led to an aggregation of tremendous power among a small group of proxy advisory firms: an ISS recommendation in favor of a proposal bumps up the shareholder vote for that proposal by 15 percentage points on average.

As is obvious, these biased, centralized services are exactly what we would want to avoid:

• Advisory firms have considerable conflicts of interest in how they are structured;
• The lack of transparency of the advisory firms’ analytical models makes it extremely difficult for investors or companies to determine why a proxy advisor has made certain determinations or to correct factual inaccuracies before a vote is held; and
• Concerns have mounted that inaccurate information is being transmitted to investors.

It seems to me that the cost of hiring an off-chain proxy advisory service < the cost of ending up in the minority (i.e. a deposit forfeited in a TCR) or of carefully evaluating and researching options across a large number of governance processes. As a proxy advisory service grows, the likelihood of its recommendation becoming the majority vote grows, creating an incentive for passive voters, or voters that hold a large number of governance tokens, to hire the service. This, in turn, centralizes the entire voting process once again. I think exploring the governance trends we’ve experienced over the last 20 years gives some insight into patterns that could arise from any on-chain or decentralized large-scale governance process.
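The cost comparison above can be made explicit with a back-of-the-envelope expected-cost model. All figures here are assumptions for illustration, not data: a passive token holder compares in-house research, careless voting, and outsourcing to an advisory service across many votes.

```python
# Hypothetical expected-cost comparison across three voting strategies.
# research_cost, deposit, fee, and the minority probabilities are all
# illustrative assumptions.
def expected_cost(research_cost, deposit, p_minority, advisory_fee, n_votes):
    in_house = n_votes * (research_cost + p_minority["researched"] * deposit)
    careless = n_votes * p_minority["careless"] * deposit
    proxy    = n_votes * (advisory_fee + p_minority["proxy"] * deposit)
    return {"in_house": in_house, "careless": careless, "proxy": proxy}

costs = expected_cost(
    research_cost=50,    # cost of evaluating one proposal yourself
    deposit=100,         # stake forfeited if you land in the minority
    p_minority={"researched": 0.1, "careless": 0.4, "proxy": 0.05},
    advisory_fee=5,      # economies of scale keep this low
    n_votes=20,
)
```

The dynamic to notice: as the advisory firm's recommendations increasingly *define* the majority, `p_minority["proxy"]` shrinks further, so the proxy strategy dominates even harder, and the centralization is self-reinforcing.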


> The disincentive to end up in the minority could give rise to a market for off-chain proxy advisory services, similar to what we have seen in traditional markets.

Agree 100%, and this aligns with a lot of the discussion around delegation-based TCR models. The expected reward is proportional to stake, so either accumulating a large stake or pooling stakes together is likely to be the best strategy.

> As is obvious, these biased centralized services are exactly what we would want to avoid.

I hesitate to go this far, as there is a pretty significant difference in context that I think is important. In the context of a TCR it’s generally assumed that if the subjective quality of the list declines, users will find it less valuable and switch to a competing list of higher subjective quality. This essentially creates a highly competitive market for governance.

> I think exploring the governance trends we’ve experienced over the last 20 years gives some insight into patterns that could arise from any on-chain or decentralized large-scale governance process.

Definitely. Though I also think we should be careful not to assume that because a process produces poor outcomes in one context, the result will be the same in a new context. For example, voting systems that rely on paper ballots have certain properties that are well researched, but taking the same system and making it electronic may not work nearly as well. Similarly, the centralization effect of proxy services may not cause the same issues when applied to decentralized governance, because the ability to exit/fork the system is such a significant check.

Ultimately, experimentation with these different approaches will allow us to validate our theories. One of the interesting ways to do that is by having competing systems that implement variations on the same process: for example, comparing a TCR that does not have a minority-bloc slash parameter to one that does, and seeing which produces the better result.

It seems to me that a possible solution to this might be to require that a judge’s bond not be case specific. Instead, the bond would be held for a grace period after a ruling, and the same bond could be used to arbitrate other cases as long as it is large enough. Doing this would allow you to create new incentives for judges to evaluate cases they are not judging, anonymously report any collusion or misconduct, and earn (some) bounty for doing so. Having this type of infraction arbitrated by another jurisdiction would deter entire jurisdictions from colluding. In the case of large-scale collusion, the reward for discovery becomes greater as the number of bonds increases.
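A minimal sketch of this non-case-specific bond (a hypothetical design, with made-up names and a made-up grace period) might look like the following: one bond backs many cases, stays locked for a grace period after each ruling, and funds a whistleblower bounty when misconduct is proven.

```python
# Hypothetical sketch of a shared juror bond with a post-ruling grace
# period and a slash-funded whistleblower bounty. Parameters are
# illustrative assumptions, not a spec.
import time

class JurorBond:
    def __init__(self, amount, grace_period=7 * 24 * 3600):
        self.amount = amount
        self.grace_period = grace_period   # lock window after each ruling
        self.locked_until = 0.0

    def can_arbitrate(self, required_stake):
        # the same bond is reusable across cases as long as it is large enough
        return self.amount >= required_stake

    def record_ruling(self, now=None):
        # each ruling extends the lock, keeping the bond at risk afterwards
        now = time.time() if now is None else now
        self.locked_until = max(self.locked_until, now + self.grace_period)

    def slash_for_misconduct(self, fraction, bounty_share=0.5):
        # part of the slashed amount pays the juror who reported the breach
        slashed = self.amount * fraction
        self.amount -= slashed
        return slashed * bounty_share      # bounty paid to the reporter

bond = JurorBond(amount=100)
bounty = bond.slash_for_misconduct(fraction=0.4)
```

Because the bond persists across cases, the discovery bounty scales with the number of rulings a corrupt juror has outstanding, which is the deterrent effect described above.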

@Abel I’d love for you to expand on this a bit; this is one of the reasons it seems like using a non-transferrable reputation for jurors makes sense, as it provides a way to punish jurors for misbehavior across many cases (past and future).

By allowing judges to audit cases that they are not participating in, and rewarding them for reporting collusion or misconduct, you create a market for policing the courts.

This would be another possible failsafe against all judges colluding, or using a third-party arbiter to make decisions for them that are not evidence based.

Have you checked out the section of the new whitepaper on collateral and reputation?

The process there is more optimized for jurors’ possible misconduct while on a case, but in theory any party could raise a dispute, and the collateral required of jurors could be sufficient incentive for people to review cases.

Yes, I agree that it is important to have jurors sign formal agreements of conduct. By allowing jurors to anonymously audit and report each other for breaches of this agreement, and by paying for the time required for those audits out of the bonds of the offending jurors, there will be far less incentive for jurors to try to cheat the system. Loss of reputation may not be enough to prevent things like bribery through anonymous smart contracts (which could also be used to create fake bribes, seemingly from the opposing party, in order to have a case dismissed for bribery). Over time it seems inevitable to me that jurors will get to know each other and that the corruptible ones will find each other one way or another. I also imagine that in some cases a 2x deposit increase for an appeal would be cost prohibitive; in those cases, if jurors are colluding, there seems to be little possible recourse. With the method I’m suggesting, claims of collusion could be made on a juror message board to notify “bounty hunters” of possible misconduct.