I keep having nagging thoughts about devs as a bottleneck in decentralization, and I’d like to explore them here.
I consistently hear the argument that dev teams are not a centralization threat. If a dev tried to push bad code (let’s define that as malicious and/or centralizing), it would not be ratified. If a team pushed bad code, or someone with merge rights was compromised, then the public would reroute to other available software. The practice of maintaining numerous clients, which maximizes the public’s ability to do exactly that, is a truly amazing accomplishment of Ethereum, by the way. If geth becomes evil, then there’s Parity. If it’s Parity, then there’s geth. If it’s both, then there are still other options. That’s the general argument, if I’m not mistaken.
Here are my reservations: codebases are getting longer and more complex. How many lines of code are in geth or in Parity’s client? What about when sharding, Plasma, and PoS are all added? As the codebases grow, I can only assume that the number of people with push rights will increase. Not just that: as codebases get longer and more complex, I would expect more sub-specialization. I wouldn’t be surprised to see one group in charge of sharding, another in charge of networking, and so on. I would also assume that people who individually grasp the entire codebase will become fewer and farther between, not that they are in any way common now.
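To make that question concrete, here is a minimal sketch of how one might measure a client codebase’s size. The repo path and the `.go` suffix are illustrative assumptions (a local clone of geth); a raw line count is only a rough proxy for complexity, but it makes the growth trend easy to track over time.

```python
# A minimal sketch for sizing a client codebase.
# Assumes you have already cloned the repo locally; the path and the
# ".go" filter are illustrative, and the count is a rough proxy only.
from pathlib import Path

def count_source_lines(repo_root: str, suffix: str = ".go") -> int:
    """Count non-blank lines across all files with the given suffix."""
    total = 0
    for path in Path(repo_root).rglob(f"*{suffix}"):
        try:
            with path.open(encoding="utf-8", errors="ignore") as f:
                total += sum(1 for line in f if line.strip())
        except OSError:
            continue  # skip unreadable entries rather than abort the scan
    return total

if __name__ == "__main__":
    print(count_source_lines("go-ethereum"))  # e.g. a local clone of geth
```

Run the same count a year apart and the trajectory, not the absolute number, is what tells you how fast the surface anyone must understand is expanding.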
I’d like to digress for a moment. I’m paraphrasing and modifying some ideas from Taleb’s Skin in the Game. The practitioners and regulators of any field will always be a minority. As jobs and fields become more specialized, only a minority of people have the requisite knowledge and skills to contribute properly. This is necessary if we are to maximize progress: all fields must more or less be controlled by small groups of people. That isn’t tyranny; it’s what allows progress in highly specialized domains. Not everyone can understand how to get rockets to Mars, which means not everyone should be part of the process.
A part of this is delegation. We don’t necessarily have in-depth medical or pharmaceutical knowledge, so we delegate our trust. We may have opinions on specific issues, but we are probably unaware that numerous other critical issues even exist. Naturally, we delegate to trusted bodies, be they government agencies or otherwise.
Coding is no different. If anything, I think one of the underappreciated aspects of Vitalik is that he has a good grasp of the entire protocol, including the 2.0-related work, at least as far as I can tell. But it also means that as time progresses, fewer and fewer people will understand what’s going on in the code. (I’m not an expert in the world of Linux, but I wonder whether Linus really knows what’s going on in the entire kernel. Perhaps. I also wonder whether Ethereum can possibly stay as small as the Linux kernel.) This is not inherently a bad thing. It does, however, mean that people will have to delegate their trust to other bodies more and more, and as that happens, I believe the chances of bad code go up.
I believe this also increases the angst about forking. One major theory holds that a compromised protocol can simply be forked. But at what point would people who are non-technical, or not sufficiently technical in the relevant matter, decide to fork? What kinds of echo chambers will be created around the various potential ecosystems? I believe that the power of defaults, UI, and the social sciences, while being addressed, is undervalued in this subject. Especially the social sciences.
If we could fork a government, any current government, would we do it? That question is probably too hypothetical to be seriously answered. What would a forked government look like? But if you would kindly bear with me for a minute, think about the massive infrastructure that would somehow need to be maintained: assistants, deputies, assistant deputies, liaisons to assistant deputies, and so on. Think, too, about the power of branding and incumbency.
In something like the Linux kernel, as critical as it is, I don’t think the chance of bad code poses the same kind of problem. A patch can be issued, and, in general, the damage can be reverted. You can probably figure out where I’m going with this. Immutable, money, blah, blah, blah. There are more incentives to attack, and less ability to revert.
Bringing this back home, I think the governance of contributions to codebases needs to be taken more seriously. Perhaps we’re not at a crisis point now, but as codebases grow, the threat surface will only get larger. In my opinion, robust processes for contributions to decentralized protocols, platforms, and programs need to be in place before they are necessary.
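To make that concrete, here is a minimal sketch of one possible contribution rule: a change becomes mergeable only once at least N of M designated maintainers have signed off on the exact revision in question. The maintainer set, the threshold, and the `Approval` type are all hypothetical; this illustrates the shape of such a rule, not any project’s actual policy.

```python
# A minimal sketch of an N-of-M maintainer sign-off rule.
# MAINTAINERS, THRESHOLD, and Approval are hypothetical placeholders.
from dataclasses import dataclass

MAINTAINERS = {"alice", "bob", "carol", "dave"}  # hypothetical reviewer set
THRESHOLD = 3  # require 3-of-4 distinct sign-offs before merge

@dataclass(frozen=True)
class Approval:
    reviewer: str
    commit_hash: str  # the exact revision the reviewer signed off on

def is_mergeable(approvals: list[Approval], head_commit: str) -> bool:
    """True if enough distinct maintainers approved the current head."""
    signers = {
        a.reviewer
        for a in approvals
        if a.reviewer in MAINTAINERS and a.commit_hash == head_commit
    }
    return len(signers) >= THRESHOLD
```

Tying each approval to a specific commit hash matters: it keeps a change from being swapped in after sign-off, which is precisely the compromised-merge-rights scenario described above. None of this removes the need to trust the maintainer set, but it does raise the number of people who must be compromised at once.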