Server side MPC

Here, we describe the bulk of the procedure. We have separated the witness generation for the zk-proof audit into a separate subsection, because it is logically independent.

Input

The server has a fixed public key $P$ in some cryptographic group, implemented over $\mathbb{F}_q$. We will assume it is the JubJub curve in what follows, but other options are available, depending on the proof system used.

The corresponding private key $[p]$ is linearly shared.

On-chain, the sequence of encrypted votes is published; each vote is accompanied by a zk-proof of its validity predicate. The form of encryption is still under discussion; there are a few options (encrypting everything with ElGamal, using ElGamal for a symmetric key and MiMC for the vote encryption, encrypting shares separately, or potentially adding supplementary advice simplifying data transfer to the main circuit) with different tradeoffs w.r.t. MPC work, witness size and client proving work, respectively. This is TBD; we continue this discussion in the client side subsection.

The individual (decrypted) vote v consists of the following data:

v.user - a unique id of the sender in the global registry.
v.from - a 254-bit key (from which this tx attempts to switch).
v.to - a 254-bit key (to which this tx attempts to switch).
v.votes[n] - a vote vector.
v.depth - a number in the range $0..2^{128}-1$.
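For concreteness, a minimal sketch of this record as a Python dataclass (illustrative only; `frm` stands in for the reserved word `from`, and `timestamp`/`invalid` are the bookkeeping fields introduced below):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Vote:
        user: int            # unique id of the sender in the global registry
        frm: int             # "from" key (254-bit)
        to: int              # "to" key (254-bit)
        votes: List[int]     # vote vector of length n
        depth: int           # in range 0..2**128 - 1
        timestamp: int = 0   # position in the global vote list (set below)
        invalid: int = 0     # invalidation flag (set below)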

The validity predicate (which needs to be proven client-side) says that the vote decrypts to the aforementioned data, and, moreover:

  1. Prover controls the cryptographic identity of v.user.
  2. v.to == 0 or v.votes[n] == 0 (the tx is either a key-switch or a vote).
  3. The (votes, user) pair satisfies the voting rules.
  4. v.depth satisfies the range constraints.
  5. Knowledge of such R that (v.to == 0) or v.to == Hash(v.depth, v.from, R).
  6. (v.depth > 0) or (v.from == init_key[v.user]), where init_key is a set of public initial keys of the users.

Requirement 5 ensures that no valid key-switches can lead to the same v.to key from different states (together with the large key size, which prevents hash collisions).

Requirement 6 constrains the initial key when we are at depth 0.

We also refer to the position of a vote in the list of all votes as v.timestamp, and add a field v.invalid = 0.
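For intuition, a cleartext sketch of the structural requirements 2, 5 and 6 (requirements 1, 3 and 4 depend on the identity scheme and voting rules and are omitted; Hash, R and init_key are the objects named above, passed in as placeholders):

    def check_structure(v, R, init_key, Hash):
        # Requirement 2: the tx is either a key-switch or a vote, not both.
        switch_or_vote = (v.to == 0) or all(x == 0 for x in v.votes)
        # Requirement 5: a nonzero target key must open as a hash of (depth, from, R).
        key_well_formed = (v.to == 0) or (v.to == Hash(v.depth, v.frm, R))
        # Requirement 6: at depth 0 the source key must be the registered initial key.
        initial_ok = (v.depth > 0) or (v.frm == init_key[v.user])
        return switch_or_vote and key_well_formed and initial_ok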

Shared decryption phase

We assume that all of the data is properly constrained (via public checking of the validity predicate proofs). After this phase (which depends on the encryption implementation), we assume that the decrypted votes are all linearly shared between the parties, and we proceed to the main part of the protocol.

We put them in the shared array vote.

Main computation

Our main idea is to use only sorting + local operations, for round optimization purposes. During the computation, we will mainly sort the main array vote (we pretend that operations are done in-place to simplify the notation, but we will need some of the computation trace later to construct the witness).

One should imagine the set of key-switches of a particular user as a set of trees (starting either from an initial key or from depth > 0). We will also require that every key-switch increases the depth by 1. Note that, because of the non-collision property, it should not be possible to have the same v.to field with different v.from or v.depth. Therefore, this graph indeed has no cycles.

0. Depth compression

In this round, we discard all votes with v.depth > total_vote_count. This is done to aggressively reduce the number of sorting rounds. Such votes had no chance of being valid anyway, and we believe an adversarial player cannot reveal any useful information using this. In what follows, the total number of votes after this reduction is m.
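Locally this round is a plain filter; in the MPC setting the comparison is evaluated on shares and only the resulting discard flag is revealed. A sketch:

    # Discard votes that cannot possibly be valid: a chain reaching depth k
    # needs at least k key-switches, so depth > total_vote_count is impossible.
    vote = [v for v in vote if v.depth <= total_vote_count]
    m = len(vote)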

1. Key compression

This round has two purposes: first, we make the key size smaller to simplify radix sort, and, second, we ensure that there are no coinciding keys from different users. It is done as follows:

Every vote is duplicated, with an additional field v.ext set to 0 and 1 respectively; we refer to this array as vote2. For an extended vote, denote v.main_key() = v.ext*v.from + (1-v.ext)*v.to.

The array vote2 is sorted (in order of priority) by v.user > v.main_key().

Introduce an array jump of the same size; set jump[0] = 0, and for i from 1 to 2m-1:

    jump[i] = (vote2[i].main_key() != vote2[i-1].main_key()) or (vote2[i].user != vote2[i-1].user)

Calculate the running sum $q[i] = \sum_{j=0}^{i} \text{jump}[j]$. This running sum is constant over the votes that are from the same user and have a coinciding main_key.

Now, for each i, we set:

    vote2[i].from = vote2[i].ext * q[i] + (1 - vote2[i].ext) * vote2[i].from
    vote2[i].to   = (1 - vote2[i].ext) * q[i] + vote2[i].ext * vote2[i].to

which sets the key by which the vote was sorted to its compressed version.

Now, we order vote2 back by timestamp (using a revealing sort), and then merge the pairs back:

  1. Reveal the vote2[2*i].ext field; if it is 1, do nothing; if it is 0, swap this element with vote2[2*i+1].
  2. Set vote[i].from = vote2[2*i].from; vote[i].to = vote2[2*i+1].to.

This replaces each key with an element of size no more than $\lceil \log(m) \rceil$, in such a way that:

  1. Same keys of the same user are mapped to the same compressed versions.
  2. There are no other collisions; in particular, same keys of different users are mapped to different compressed versions. // This never happens normally, because initial keys are different, but it might be the result of a deliberate attack (an attacker could send a "hanging" vote of depth > 0 with an arbitrary from key).
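Putting the round together, here is a cleartext (non-MPC) simulation of the whole compression, reusing the Vote sketch above; the real protocol performs the first sort obliviously, and we emulate the reveal-and-swap step by sorting the ext = 1 copy first in each pair:

    from copy import copy

    def compress_keys(vote):
        for t, v in enumerate(vote):
            v.timestamp = t
        # Duplicate every vote; the ext = 0 copy carries "to", ext = 1 carries "from".
        vote2 = []
        for v in vote:
            a, b = copy(v), copy(v)
            a.ext, b.ext = 0, 1          # dynamic attribute, fine on a plain dataclass
            vote2 += [a, b]
        main_key = lambda v: v.ext * v.frm + (1 - v.ext) * v.to
        # Oblivious sort in the real protocol; a plain sort here.
        vote2.sort(key=lambda v: (v.user, main_key(v)))
        # q[i]: running count of distinct (user, main_key) groups.
        q, prev = [], None
        for v in vote2:
            cur = (v.user, main_key(v))
            q.append((q[-1] if q else 0) + int(prev is not None and cur != prev))
            prev = cur
        # Overwrite the key each copy was sorted by with its compressed version.
        for v, qi in zip(vote2, q):
            if v.ext:
                v.frm = qi
            else:
                v.to = qi
        # Revealing sort back by timestamp, ext = 1 copy first in each pair.
        vote2.sort(key=lambda v: (v.timestamp, -v.ext))
        for i, v in enumerate(vote):
            v.frm = vote2[2 * i].frm
            v.to = vote2[2 * i + 1].to
        return vote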

2. Initial sort

The votes are sorted (in order of priority) by v.from > v.timestamp. They were already sorted by v.timestamp, so no additional action is required for the secondary key, because the sort is stable.

After this, we set vote[k].invalid = (vote[k-1].from == vote[k].from) for all k > 0.

After this round, every vote that uses a particular "from" key after the first attempt to use that key is invalidated. Note that if the first key-switch later turns out to be invalid (due to the depth requirement), it will also have invalidated subsequent key-switches that would otherwise be valid.

The result of the round is that the trees of valid key-switches are now a set of linear paths ("bamboos").
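A cleartext sketch of this round (Python's sort is stable, which is exactly the property the round relies on):

    # Votes are already in timestamp order; a stable sort by `frm` alone
    # therefore yields (frm, timestamp) order.
    vote.sort(key=lambda v: v.frm)
    for k in range(1, m):
        if vote[k].frm == vote[k - 1].frm:
            vote[k].invalid = 1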

3. The main cycle

Denote $d = \lceil \log(m) \rceil$. We will need $d$ rounds of sorting.

Denote v.odd(r) = v.depth & (1 << r), and v.sort_key(r) = v.odd(r) ? v.from : v.to.

On r-th round:

Sort the votes in vote (in order of priority) by v.invalid > v.sort_key(r).

Now,

for i from 1 to m-1:
    if vote[i].odd(r):
        # note the parentheses: `+` binds tighter than `<<`
        if (vote[i-1].depth + (1 << r) == vote[i].depth) and (vote[i-1].to == vote[i].from):
            vote[i-1].to = vote[i].to
            vote[i-1].votes = vote[i].votes
        vote[i].invalid = 1

This results in every vote with odd(r) set checking whether its left neighbor has the same to field as its own from field, and whether the depth actually increases by 1<<r. Then we "merge" them: the left neighbor's to field and vote vector are set to those of the composition, and the odd vote is invalidated.

The votes with odd(r) set which do not find a matching left neighbor are invalidated without further action.

Note: to counter the quirky "time travel" behavior mentioned in the introduction, one needs to add the clause ... and (vote[i-1].timestamp < vote[i].timestamp) to the check. This is not hard to do if needed.
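For reference, the whole main cycle as a cleartext simulation of the pseudocode above (oblivious sorts replaced by plain ones, `frm` again standing in for `from`; the optional timestamp clause is omitted):

    import math

    def main_cycle(vote, m):
        d = math.ceil(math.log2(m))
        for r in range(d):
            odd = lambda v: (v.depth >> r) & 1
            sort_key = lambda v: v.frm if odd(v) else v.to
            # Oblivious sort in the real protocol.
            vote.sort(key=lambda v: (v.invalid, sort_key(v)))
            for i in range(1, m):
                if odd(vote[i]):
                    if (vote[i - 1].depth + (1 << r) == vote[i].depth
                            and vote[i - 1].to == vote[i].frm):
                        # Merge: the left neighbor absorbs the composition.
                        vote[i - 1].to = vote[i].to
                        vote[i - 1].votes = vote[i].votes
                    vote[i].invalid = 1
        return vote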

4. Final calculation

At this point, we just need to sum vote[i].votes * (1 - vote[i].invalid) over the array to get the voting results.
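In cleartext, the tally is a single masked vector sum:

    n = len(vote[0].votes)
    result = [0] * n
    for v in vote:
        # in MPC this masking is a shared multiplication, not a branch
        for j in range(n):
            result[j] += v.votes[j] * (1 - v.invalid)

In the next part, we will describe the slight modifications needed to produce an auditing zk-proof.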

Complexity

Concrete estimates here are hard, and they will probably depend on the size of the vote vector; one methodology could be estimating the total number of radix sorts that we need to perform (assuming a constant register size). In practice, this will be traded off against the RAM size of the server machines, but it at least gives an estimate. We will also roughly upper-bound the size of the user_id field by $d = \lceil \log(m) \rceil$.

  1. In the compression phase, we need to sort an array of size $2m$ by a key of total size $254+d$, and then do one additional revealing sort of size $2m$ (which we will promptly ignore in this calculation).
  2. After the compression phase, the fields from and to also have size $d$. The initial sort, thus, is a sort of an array of size $m$ by a key of size $d$.
  3. The main phase involves $d$ rounds in which we sort by a $(d+1)$-bit key (the additional bit is for the invalid flag).

Denoting by $r$ the radix sort register bitsize, we need to do $(254+d)/r$ radix sorts of size $2m$, and $d(d+2)/r$ radix sorts of size $m$.

Round complexity, thus, is $(8+n)(256+2d)d/r + d \times \text{local\_ops}$, but aggressive optimization of $r$ is bounded by the exponential blowup in memory requirements.
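A toy calculator for these sort counts under the stated assumptions (illustrative only):

    import math

    def sort_counts(m, r):
        d = math.ceil(math.log2(m))
        compression_sorts = math.ceil((254 + d) / r)   # on arrays of size 2m
        main_sorts = math.ceil(d * (d + 2) / r)        # on arrays of size m
        return compression_sorts, main_sorts

    # e.g. m = 2**20 votes and an 8-bit radix register:
    print(sort_counts(2**20, 8))   # -> (35, 55)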
