Coercion Resistant On-Chain Voting


#1

One of the biggest challenges with on-chain governance is the impact of credible smart-contract-based bribes on blockchain voting systems. In general, such a contract would hold a credible commitment of funds and expose a public function that allows users to provide proof that they performed the requested action and receive payment in a trustless way.
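
For concreteness, here is a minimal sketch of such a contract’s logic, written in Python purely for illustration (a real attack would be an on-chain contract); `verify_vote_proof`, `BribeContract`, and the payment bookkeeping are all hypothetical stand-ins:

```python
def verify_vote_proof(voter: str, desired_vote: str, proof: bytes) -> bool:
    """Placeholder: a real contract would check cryptographic evidence here,
    e.g. a storage proof that `voter` cast `desired_vote` on-chain."""
    raise NotImplementedError

class BribeContract:
    def __init__(self, desired_vote: str, bounty: int, funds: int):
        self.desired_vote = desired_vote   # the outcome the briber is paying for
        self.bounty = bounty               # payout per compliant voter
        self.funds = funds                 # publicly visible commitment of funds
        self.paid: set[str] = set()        # voters who have already claimed
        self.balances: dict[str, int] = {}

    def claim(self, voter: str, proof: bytes) -> None:
        """Public function: anyone who can prove they voted as requested is
        paid, with no further interaction or trust between briber and voter."""
        assert voter not in self.paid, "already claimed"
        assert self.funds >= self.bounty, "bounty exhausted"
        assert verify_vote_proof(voter, self.desired_vote, proof), "bad proof"
        self.paid.add(voter)
        self.funds -= self.bounty
        self.balances[voter] = self.balances.get(voter, 0) + self.bounty
```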

This is a significant issue because it does not require direct or provable interaction between any of the parties involved, and the parties do not need to trust each other. From the perspective of dishonest actors, this solves two common challenges with taking or offering bribes: neither party can be sure their counterparty will actually follow through, and the person being bribed cannot be sure whether the briber will later use the fact that they took a bribe as leverage.

To mitigate this “trustless” bribery issue, it is possible to explore mechanisms which make it difficult or impossible for voters to prove to a smart contract how they voted.

Such a mechanism would require the following two properties:

  1. Voters should be confident that their vote was included and tabulated correctly
  2. Voters must not be able to prove to a smart-contract how they voted

These properties are very difficult to achieve simultaneously, because if a user can publicly verify that their vote was included correctly, then they can generally provide that same proof to a smart contract showing how they voted.

Systems that rely on zero-knowledge proofs or ring signatures can provide voters with the option of privacy, but cannot prevent them from proving how they voted if they choose to. Such systems would offer some benefit, in that they force the person taking the bribe to make public information that was previously private, and those users could therefore be punished. However, punishing a bad actor requires them to have some sort of deposit at stake, and they can generally withdraw that deposit before accepting a bribe. Assuming a stake-based system has some withdrawal period, the punishment can be avoided simply by waiting until the stake has been withdrawn before accepting the bribe.

In an ideal case, the voter would not be able to prove how they voted even by making private information public. One way to accomplish this is by using a group signature scheme, combined with sMPC (secure multi-party computation) or a secure enclave acting as a trusted group manager. Projects like Keep and Enigma are actively working on infrastructure to support this type of computation in a secure way.

  1. Voters are granted a group signing key
  2. Voters submit encrypted votes using the group manager’s public key, validated with their group signature
  3. The group manager decrypts, validates, and tabulates the votes, returning only the result.
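
A minimal Python sketch of step 3, assuming hypothetical `group_verify` and `decrypt` primitives; in the proposal this logic would run inside the sMPC network or enclave, so no single party ever sees a decrypted vote:

```python
from collections import Counter

def group_verify(group_pubkey, message: bytes, signature: bytes) -> bool:
    """Placeholder: a real group signature scheme would be used here."""
    raise NotImplementedError

def decrypt(manager_privkey, ciphertext: bytes) -> str:
    """Placeholder: decryption with the manager's (sMPC/enclave-held) key."""
    raise NotImplementedError

def tabulate(manager_privkey, group_pubkey, submissions):
    """Each submission is (encrypted_vote, group_signature).
    Returns only the aggregate tally, never individual votes."""
    tally = Counter()
    for encrypted_vote, signature in submissions:
        # A group signature proves the sender is *some* group member
        # without revealing which one.
        if not group_verify(group_pubkey, encrypted_vote, signature):
            continue  # reject submissions from non-members
        vote = decrypt(manager_privkey, encrypted_vote)
        tally[vote] += 1
    # Preventing double voting would additionally require linkable group
    # signatures or use of the manager's "opening" capability.
    return dict(tally)  # only the aggregate result leaves the manager
```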

This mechanism ensures that voters are unable to prove to a smart contract how they voted, unless the group manager is compromised. Users are not able to verify that their vote was included or that the tabulation was correct, but assuming they trust the security of the Keep/Enigma-based group manager, they can be reasonably confident that the operation occurred as expected.

Some possible issues:

  1. It is unclear what the cost overhead of using something like this would be, or how it scales with group size.
  2. Since how people voted remains private, mechanisms which reward/punish voters based on how they voted become challenging.

#2

Thanks for introducing that interesting problem. First take on a potential fix: allow voters to access their voting history only if they can also, by a separate transaction, alter the publicly available version of that history in a private manner.

For instance: A bribes B to vote for A in an upcoming election. B privately votes for C, instead. B confirms that vote by receiving a report from a trusted service. The same service allows B to make the report read falsely, “B voted for A,” in a manner not detectable by a third party, such as A.

In that scenario, A will find it difficult to enforce bribery agreements, inhibiting such transactions. And, yet, B could still check to see that the vote was registered properly.


#3

Interesting approach, thank you for sharing!

For instance: A bribes B to vote for A in an upcoming election. B privately votes for C, instead. B confirms that vote by receiving a report from a trusted service. The same service allows B to make the report read falsely, “B voted for A,” in a manner not detectable by a third party, such as A.

I think this may be simplified if we say that a user can send an encrypted request for either a valid or an invalid receipt. From a third party’s perspective, inspecting the request cannot reveal whether the user is requesting a valid receipt or an invalid receipt. In this way the proof is only meaningful to people who know whether the original request was for a valid or an invalid one.
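
As a sketch of what I mean, assuming some randomized (semantically secure) public-key encryption; the `pk_encrypt` placeholder below is hypothetical:

```python
import json
import secrets

def pk_encrypt(service_pubkey, plaintext: bytes) -> bytes:
    """Placeholder for a real randomized public-key encryption call."""
    raise NotImplementedError

def make_receipt_request(service_pubkey, want_valid: bool) -> bytes:
    request = {
        "want_valid": want_valid,        # True = real receipt, False = decoy
        "nonce": secrets.token_hex(16),  # fresh randomness per request
    }
    return pk_encrypt(service_pubkey, json.dumps(request).encode())

# Because the encryption is randomized, these two requests look identical
# to an observer such as briber A; only the service and the requester know
# which kind of receipt was asked for.
# real_req  = make_receipt_request(service_pubkey, want_valid=True)
# decoy_req = make_receipt_request(service_pubkey, want_valid=False)
```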

However, I don’t think this actually provides much, if any, benefit, because it assumes that B trusts the service to provide a valid receipt when B requests it. Since the service can produce both valid and invalid receipts, B cannot use the receipt to verify whether the vote was included correctly. And if B trusts the service, then B has no need to verify the vote in the first place.

If we instead assume B does not trust the service and requires some sort of proof that the vote was included we run into the following challenge:

  1. If a non-interactive proof is used: the service cannot produce both a valid and an invalid proof, and therefore any third party that sees the proof will know how B voted.
  2. If an interactive proof is used: the service can prove the vote directly to B but not to others. However, if B wants to accept a bribe from A, B can replay the interactive proof, supplying additional proofs about B’s interactions with the service. By providing the original transcript of interactions, along with proofs that those interactions took place, B can prove to A how they voted.

I don’t consider myself an expert on cryptographic proofs, so it’s possible that I am mistaken and there is a cryptographic technique that would allow the service to prove something to B while ensuring B has no way to subsequently prove that data to a third party.


#4

To recap, you posit a dilemma: voter B cannot trust a vote-reporting service that could convincingly lie to briber A, on the one hand, and on the other, any vote-reporting service deserving of B’s trust must also reveal its secrets to A. In that view, the proposed solution “assumes that B trusts the service to provide a valid receipt when B requests it. Since the service can produce both valid and invalid receipts, B cannot use the receipt to verify whether the vote was included correctly.”

A telling concern. Perhaps an algorithm that first reports the true result and thereafter cycles between false and true answers would answer it, though. Consider this scenario:

B tries to cast a vote using his private key. He worries that it did not get through, as his Mongolian 'net connection is not very steady. B thus queries the vote reporting service, again using his private key, to double-check that his vote got counted correctly. (He trusts the vote reporting service because it uses open source code and carries other third party certifications.) Just for fun, to test out Aragon’s “Anti-Bribery Defense System,” B queries the vote reporting service again. This time, as he expected, it falsely reports that he voted otherwise than he actually did.

B queries the system again. It now reports the truth. Again? A falsehood.

Each report requires B to use his private key. There is no way for B or anyone else to tell from the vote reporting service’s records whether the algorithm is reporting for the first time, or while in a subsequent odd or even cycle. The algorithm neither stores nor needs that data. It need only know the prior report state.
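
A minimal sketch of that alternating-report behavior (one reporter per voter key; the class name and vote values are illustrative):

```python
class FlipFlopReporter:
    """Reports the true vote on the first query, then alternates.

    Only the *next* report is stored, so inspecting the service's state
    never reveals whether a given query is the first, an odd, or an even one.
    """

    def __init__(self, true_vote: str, decoy_vote: str):
        self._next_report = true_vote  # the very first query sees the truth
        self._other = decoy_vote

    def query(self) -> str:
        report = self._next_report
        # Swap so the following query returns the other answer; no query
        # count or history is ever recorded.
        self._next_report, self._other = self._other, self._next_report
        return report

# B actually voted for C; the decoy report claims a vote for A.
reporter = FlipFlopReporter(true_vote="C", decoy_vote="A")
assert reporter.query() == "C"  # first query: the truth (only B knows this)
assert reporter.query() == "A"  # second query: the falsehood
assert reporter.query() == "C"  # and so on, alternating
```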

B trusts the voting service report because he knows when he first queries it. If he were very absent minded, and in the habit of repeatedly querying the service, he might forget how he originally voted. In that event he would join A, the would-be briber, in being unable to discern from the trusted report what it initially and accurately reported about B’s vote.

In that scenario, A remains only a would-be briber. She never advances to an actual briber because she finds it impossible to trust the promises of voters, like B, she knows only as strongly pseudonymous virtual identities. B can promise to vote for A’s preferred choice, and can even show a voting report service record that A knows only B could produce. But A cannot tell from that report whether it is the first true one, or a subsequent false one. B knows, of course, because he knows when he first logged onto the voting service. But A has no reliable way of knowing if B did that, or if so, how many times he has run the reporting cycle.

As you note, an expert on cryptographic proofs should probably weigh in. And perhaps someday soon, Aragon will be able to assess proposed algorithms by putting them through lab testing and certification. At all events, it says a lot for the future of governance that you and your colleagues care so much about getting these small but crucial details right.


#5

If you are willing to trust a centralized service, none of this is difficult - a few such services already exist. However, there has to be a centralized service that everyone is willing to trust. If the service is decentralized, then the potential briber will know if it is an even or odd request.


#6

The algorithm need not and should not keep track of whether a request is odd or even. It should instead simply report the extant state and then flip the report in anticipation of the next request. The next inquiry, whether by A or B, would see no record of prior queries. And, yet, voter B would be able to discern, by his first query, whether the vote was recorded accurately.


#7

@jvluso

If the service is decentralized, then the potential briber will know if it is an even or odd request.

This is not necessarily true if you use something like sMPC, which is what https://keep.network is working on.

@proftomwbell

B trusts the voting service report because he knows when he first queries it.

The problem is that, since the service can produce both valid and invalid responses, B is only able to trust the response because B trusts that the code run by the service provider follows the rule that the first response is correct. If we assume that B can trust the code run by the service provider, then B is already assured that if the vote was received by the provider then it will be included. If the service provider is coded to simply look at votes which are submitted on-chain, the user already has proof of inclusion and there is no additional benefit from being able to request a receipt.

On the other hand if the service provider provides a proof that the vote was included with a specific value, rather than simply a receipt, I think you end up with the problem I posed earlier.

My point in the original post was that even though the voter cannot directly verify their vote was included correctly, so long as they are able to trust that the code run by the service provider executed correctly, the mechanism seems fairly reasonable from the voter’s perspective.


#8

Agreed that if voters can trust that the code run by a service provider executed correctly, then they can trust the overall vote-counting process. Even without affording a way to double-check the value of cast votes, the system would still provide a great deal more confidence than voting systems people already apparently trust. And in answer to questions like, “Did my vote from Outer Mongolia get through the pipes?”, a mere receipt would suffice.

So if voters can trust the code, and any receipts of votes cast, then they have as much proof of the system’s fidelity as a reasonable person would want. Furthermore, they may not be able to get any better proof without compromising the system’s privacy protections.

Suppose, though, that it were not impossible to allow a voter to see what value the system recorded for him/her/it (in truth, the value recorded for the associated private key) without making the same information available to a briber, coercer, or other third party. Such a mechanism would provide an additional level of confidence in the voting system. It might, if generalizable, prove useful in other contexts, too.

So with apologies for perhaps dwelling too long on the question, here’s a graphical explanation of the system proposed earlier:


#9

Relevant research posted on Hacking Distributed yesterday: On-Chain Vote Buying and the Rise of Dark DAOs.

Let me know if you want me to open a new thread just to discuss this article, as there’s a lot to unpack here.


#10

There was a follow-up by him as well: https://pdaian.com/blog/vote-buying-on-chain-governance-and-quadratic-plutocracy/

I think the idea of using trusted hardware on the voting side is significantly different from this topic (though definitely related), so I will make a new thread for that.