Discussion on Vote Buying/Dark DAOs

Tags: bribery, voting

#1

Philip Daian recently published two articles exploring how trusted hardware can be used to facilitate vote buying in a particularly concerning way. If you haven’t read the articles yet, I highly recommend taking the time to do so in full.

http://hackingdistributed.com/2018/07/02/on-chain-vote-buying/

https://pdaian.com/blog/vote-buying-on-chain-governance-and-quadratic-plutocracy/

The general premise is that trusted hardware such as Intel’s SGX platform can be used to sell limited access to a private key. In such a setup, someone can provably grant access to sign a vote without giving up ownership of the voting key itself. Such activity can happen opaquely, so no other party can tell that the transaction has occurred. The vulnerability stems from the key generation process, and is possible with any scheme that allows users to generate their own key in an untrusted environment.
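
To make the primitive concrete, here is a minimal sketch of what selling limited key access could look like, written as plain Python with HMAC standing in for a real signature scheme. Nothing here uses an actual SGX API, and all names are made up:

```python
import hmac
import hashlib

class EnclaveVoteSigner:
    """Conceptually runs inside the TEE; the host never sees `_key`."""

    def __init__(self, key: bytes, purchased_option: str):
        self._key = key                      # sealed inside the enclave
        self._purchased = purchased_option   # the option the buyer paid for

    def sign_ballot(self, proposal_id: str, option: str) -> bytes:
        # The enclave signs only the purchased vote, so the seller grants
        # provable but strictly limited access without surrendering the key.
        if option != self._purchased:
            raise PermissionError("enclave only signs the purchased option")
        message = f"{proposal_id}:{option}".encode()
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def export_key(self) -> bytes:
        # Ownership is never transferred: the raw key cannot leave.
        raise PermissionError("key never leaves the enclave")
```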

He goes on to explore both simple and more complex attacks that exploit this process, the most interesting being the idea of a “dark DAO”: a completely hidden vote-buying cartel. Such an organization would accumulate influence by having voters run malicious software and, when activated, would swing votes to create unexpected bad outcomes and profit from short-selling.

This has significant impacts on DAOs and other on-chain organizations that rely on voting (including the consensus layer, both PoW and PoS). This post is intended to discuss such attacks, their feasibility, and possible mitigations.

Impact

First, for such an attack to be successful, users must be willing to trust their key (and associated assets) to custom wallet software that allows vote buying. It’s unclear whether a significant portion of users would be willing to take such a risk, especially as doing so could easily result in their assets being stolen by a malicious party.

At first glance it would seem that 1p1v and other identity-constrained systems are more heavily impacted by vote buying attacks: in 1p1v systems each voter has a small and identical influence, but voters may have drastically different stakes. In such a system, voters with low stakes will be willing to sell their votes relatively cheaply.

In stake-weighted or reputation-weighted voting, a voter’s influence is more closely tied to their stake in the system, so they would need to be compensated much more before taking an action that would cause them to lose or devalue that stake. PoW is effectively hash-power-weighted voting and can probably be thought of similarly to proof of stake, though the possibility to fork and remove stake may be a significant difference.

However, even stake-weighted and reputation-weighted systems seem vulnerable if voters assume that others are likely to participate in vote buying, as they may then choose to participate themselves as a means to cut their losses.

Possible Mitigations

The best option seems to be requiring key generation in a trusted environment; however, such an approach could be challenging while retaining the permissionless nature of a public blockchain application.

For some types of DAOs it may be possible to require voters to provide a written justification along with their votes, which can be used to judge whether a vote is legitimate. Bought votes may struggle to come with a convincing justification, particularly if they are being manipulated through a semi-automated process like a dark DAO. These justifications could be combined with a subjective oracle mechanism, like the Aragon Court, to slash voters whose written justification appears fishy. Such a mechanism would significantly increase the risk of participating.
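
As a rough sketch of how the justification-plus-court mechanism might fit together (everything here is hypothetical, and the court ruling is abstracted down to a single boolean):

```python
from dataclasses import dataclass

@dataclass
class JustifiedVote:
    voter: str
    option: str
    justification: str  # written rationale, published alongside the vote

def settle(vote: JustifiedVote, court_ruled_fishy: bool,
           stakes: dict, slash_fraction: float = 0.5) -> None:
    # `court_ruled_fishy` stands in for the outcome of a subjective oracle
    # dispute (an Aragon Court-style ruling on the justification).
    if court_ruled_fishy:
        stakes[vote.voter] -= int(stakes[vote.voter] * slash_fraction)
```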

Forking and subjectivity may also be a possible solution: in the event of an attack, a fork can be created and a market-based mechanism used as a fork-choice rule. This may be easier to implement at the base layer, but in narrow cases it may also be applicable at the application layer (e.g. Augur). This type of approach is unlikely to work well where there are communal assets that cannot be forked.


#2

So the sMPC scheme you outlined here obfuscates votes cast by individual users. The remaining problem from Philip’s very interesting article seems to be users who are incentivized or coerced into running a custom attack wallet with their own private key, allowing an attacker to vote on their behalf.

What would help is a mechanism for users to change their votes after selling them without the attacker knowing. It seems like we need a private channel between the user and the sMPC contract in order to facilitate this.

There could be an intermediate step in your sMPC scheme where voters request that the sMPC contract generate a voting key. Voters then sign their ballots with both the group key and this individual voting key. However, voters may at any time request a new voting key from the secret contract, which invalidates any prior voting keys.
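
A toy sketch of that intermediate step, with the sMPC machinery abstracted into a single trusted object and all names invented:

```python
import secrets

class SecretVotingContract:
    """Stands in for the secret contract; only the *latest* key per voter
    is valid, so requesting a new key invalidates every earlier one."""

    def __init__(self):
        self._current_key = {}  # voter address -> active voting key id

    def request_voting_key(self, voter: str) -> str:
        key_id = secrets.token_hex(16)
        self._current_key[voter] = key_id  # prior keys become invalid
        return key_id

    def ballot_is_valid(self, voter: str, key_id: str) -> bool:
        # In the real scheme the ballot would also carry a group-key
        # signature; here we check only the individual voting key.
        return self._current_key.get(voter) == key_id
```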

In theory, this would give voters the ability to sell their vote and then invalidate the voting key after selling it. The only problem is that the attacker could observe the request transaction for a new key. This is where the need for a private channel between the user and the secret contract comes into play. If the user could privately communicate the request to change their vote to the secret contract, the attacker’s trusted-hardware shackles would be a waste of time.

How can we allow for private communication with the secret contract? Perhaps the user could sign a message with their private key and the sMPC group key and use a client for Keep or Enigma to submit this message directly to the secret contract. I’m not intimately familiar with the architecture for Keep or Enigma, so this assumes the user can use some sort of client to submit a message directly to these protocols.
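
Abstractly, I imagine the request looking something like this; every function below is an assumed primitive, not a real Keep or Enigma API:

```python
import json

def private_rekey_request(voter_address, sign_with_eth_key,
                          encrypt_to_group_key, submit_off_chain):
    # Hypothetical flow: prove the request came from the voter, hide its
    # contents from the attacker, and bypass the public chain entirely.
    request = json.dumps({"op": "rotate_voting_key", "voter": voter_address})
    signed = sign_with_eth_key(request)    # authenticates the voter
    sealed = encrypt_to_group_key(signed)  # only the sMPC group can read it
    submit_off_chain(sealed)               # never appears in a public mempool
```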


#3

Hmm, I think the challenge is that the issue lies with the initial key generation process. The mechanism you propose depends on the user having a voting key (which may or may not be compromised) and a separate key which is assumed to be uncompromised. If this second key is also generated by the user, then the attacker would simply require this key to be generated in a trusted environment so they have assurances about how it is used.

As far as I can tell there really is no way around the issue as long as the user is able to generate their own key. We could require them to generate the key using trusted hardware that ensures the key is revealed to their OS, but that doesn’t seem particularly secure. We could also give each authenticated user a piece of hardware and use that in the key generation process, but that would be very difficult to decentralize. It seems likely that this type of attack is unavoidable if we want permissionless, decentralized key generation.


#4

The attacker could certainly require the user to generate the key from a trusted environment. However, if the sMPC contract allows the user to request a new key at any time, what’s to stop the user from hopping on a separate device and invalidating the compromised key? If the user can do this via a secure channel from a separate device, the attacker could never be sure whether the voting key is still valid or not.


#5

How does the user authenticate?


#6

With their Ethereum private key. To clarify, there are three types of keys in the scheme I describe:

  1. User’s Key - Ethereum private key.
  2. Voting Key - User transmits a transaction to the secret contract signed with their Ethereum private key to request the generation of a voting key.
  3. Group Key - Public key provided by the group manager secret contract.

I assume #1 and #2 can both be compromised by the attacker; however, the user can take the compromised key #1 to a separate device and request a new voting key (#2) at any time.

The attacker could also do the same at any time since they have access to key #1, so you ultimately could end up with both the attacker and the user continuously submitting requests for new keys in a race to be the last one in before the vote ends.
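
The race boils down to “last valid rotation before the vote closes wins”, e.g.:

```python
def controlling_party(rotation_requests, vote_close_time):
    """rotation_requests: list of (timestamp, requester) pairs.
    Whoever lands the final rotation before the close holds the only
    valid voting key. Purely illustrative."""
    valid = [r for r in rotation_requests if r[0] < vote_close_time]
    return max(valid)[1] if valid else None

# e.g. controlling_party([(10, "attacker"), (42, "user")], 100) -> "user"
```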

I’d say this makes victory for the attacker much less certain, especially if the user is on conventional hardware, which is much faster than the trusted hardware the attacker is confined to.


#7

The attacker could also do the same at any time since they have access to key #1

I don’t think this is the case; if the key is generated in a trusted execution environment, it can be hidden from both the attacker and the “owner”.


#8

That’s a great point; I hadn’t considered key generation from within the TEE. As you mentioned, it then seems the only solution is to require trusted hardware, so that the wallet software the user runs and generates the key with is verifiably benign. You mentioned this doesn’t seem secure. What would the issue with this approach be?


#9

As you mentioned, it then seems the only solution is to require trusted hardware, so that the wallet software the user runs and generates the key with is verifiably benign.

I’m not sure how I would define verifiably benign, but I think the best case here is that we can ensure the software runs in such a way that the key is exposed outside of the TEE (e.g. to the operating system). Assuming we don’t have some weird TEE running within a TEE, this guarantees that the user has direct control over their key. However, it seems we would then be dependent on a key which is “hot”, because it has been exposed to the OS of a machine connected to the internet.


#10

You could require all users to vote via verified wallet software running remotely on SGX. They describe this in the article as one way to form a dark DAO using SGX with remote attestation. In this case, you could use it for good by requiring an attestation that the user is running a “benign” wallet on SGX. This would prevent the attacker from controlling the voting wallet through scripting or a malicious DAO. The user could still try to temporarily lend their tokens to an attacker’s SGX wallet, but they would have no guarantee the attacker would pay them or even return their tokens. Removing the possibility of automation via SGX would seem to cripple the incentive structure for the voter to sell their vote in the first place.
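
A sketch of what that attestation gate might look like on the accepting side; the quote verification and ballot recording are passed in as assumed helpers, and the measurement value is made up:

```python
# MRENCLAVE-style measurements of wallet builds the DAO has audited.
APPROVED_WALLET_MEASUREMENTS = {"mrenclave-of-audited-benign-wallet"}

def accept_vote(quote, ballot, verify_quote, record_ballot) -> bool:
    report = verify_quote(quote)  # e.g. checked against the vendor's service
    if report is None:
        return False              # attestation quote failed verification
    if report.get("mr_enclave") not in APPROVED_WALLET_MEASUREMENTS:
        return False              # not running the whitelisted benign wallet
    record_ballot(ballot)
    return True
```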


#11

Agree, and this is what I was trying to suggest here:

I’m not sure how I would define verifiably benign, but I think the best case here is that we can ensure that the software runs in such a way that the key is exposed outside of the TEE (e.g. to the operating system), and assuming we don’t have some weird TEE running within a TEE, this guarantees that the user has direct control over their key, however, it seems we would then be dependent on a key which is “hot” because it has been exposed to the OS of a machine connected to the internet.

However, I think my assumption that it would necessarily become a hot wallet may be inaccurate, assuming the TEE’s attestations are provided in a non-interactive manner.

The other main downside is that users require some sort of specialized hardware, and the voting administration (smart contract) has to validate that the specialized hardware is trusted. Because all voters use the same hardware (or one of a limited number of whitelisted devices), any hardware vulnerability could have significant consequences.

But it does seem like requiring TEE attestations that wallet software is benign could be a reasonable mitigation in situations where vote buying is a significant concern. :smiley:


#12

Enigma just published an article on the topic, actually mentioning @lkngtn! https://blog.enigma.co/dark-daos-and-the-complexity-of-secret-voting-fc3b4fe4d666

Their proposed solution is to require that the code used to sign a vote can only be run in a TEE that exposes the voting key to the host (thereby making the key accessible to the briber, who could use it to transfer the tokens). It is an interesting idea, but I can’t see how the fact that the vote was signed in that way could be verified.

Also, if ‘nested TEEs’ as discussed above are possible, they would break the proposed scheme: the briber could wrap the voting machine in another machine (a ‘wrapper machine’) that receives the encrypted private key and passes it to the voting machine, which then exposes the unencrypted key to the ‘wrapper machine’, which in turn verifiably does not expose it to the actual host.
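
In pseudocode, the wrapper attack would look roughly like this (purely conceptual; no real enclave API is used):

```python
class VotingEnclave:
    """The 'benign' machine: it dutifully exposes the key to its host."""
    def __init__(self, host):
        self._host = host

    def run(self, encrypted_key, decrypt):
        key = decrypt(encrypted_key)
        self._host.receive(key)  # scheme satisfied: key exposed to "host"

class WrapperEnclave:
    """The briber's outer machine, posing as the host."""
    def receive(self, key):
        self.captured_key = key  # key never reaches the *actual* host

# The briber runs VotingEnclave(host=WrapperEnclave()), so the exposure
# requirement is met in form but defeated in substance.
```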


#13

In practice, would it be sufficient to confirm each vote using a mobile app? It seems that in common use cases, important votes are infrequent enough that using a second factor would prevent a massive amount of voting from occurring without the user being aware.

In addition, the two-factor system would serve as a confirmation. If there are concerns that it would feel slow, it could be possible to simply display all votes and batch-confirm them on mobile.
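
Something as simple as this on the confirmation side might do (names hypothetical):

```python
def finalize_ballots(pending_ballots, mobile_confirm):
    # `mobile_confirm` stands in for a second-factor prompt on the user's
    # phone; it shows the whole batch at once, so large-scale voting can't
    # happen without the user noticing.
    return pending_ballots if mobile_confirm(pending_ballots) else []
```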


#14

I’m not sure about other vendors, but it appears that Intel basically has their SGX TEE sign a message using a private key provisioned by Intel within the enclave when the device is created. The contract would need to be able to submit this to Intel’s attestation service to verify the validity of the TEE. Intel’s documentation covers the details, and a code sample is available as well.

Based on this it seems like you could require votes to be preceded by a successful validation via Intel that the voter is within an SGX enclave.
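
Based on my (possibly wrong) reading of those docs, the client-side flow would be something like the following, with each step an assumed helper rather than a real API:

```python
import hashlib

def attested_vote(ballot: bytes, get_quote_from_enclave,
                  submit_to_attestation_service, send_to_contract):
    # Bind the ballot to the quote so the attestation covers this exact vote.
    quote = get_quote_from_enclave(report_data=hashlib.sha256(ballot).digest())
    # The attestation service verifies the quote and countersigns a report.
    signed_report = submit_to_attestation_service(quote)
    # The contract validates the report's signature before counting the vote.
    send_to_contract(ballot, signed_report)
```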

How would you nest one TEE within another given the tamper protections? Wouldn’t you need to tamper with the outer TEE in order to nest the inner TEE within it?