What is Jubjub?

Zcash’s next major protocol upgrade, codenamed Sapling, will feature a number of improvements to the performance, security and usability of our shielded transactions. We’ve talked about some of these improvements, such as a new elliptic curve called BLS12-381 which will lead to direct performance and security gains in Zcash. On top of this cryptographic foundation, we’re building new primitives and tools to improve performance and usability, starting with something called Jubjub.

Smaller Circuits

Currently, Zcash relies on the SHA256 compression function as a collision resistant hash function (for the accumulator), for a MAC scheme to prevent malleability, for PRFs, and for commitment schemes. SHA256 consists mostly of boolean operations, so it is not efficient to evaluate inside of a zk-SNARK circuit, which is an arithmetic circuit over a large prime field. Each invocation of SHA256 currently adds tens of thousands of multiplication gates, making it the primary cost during proving.
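To see concretely why this is costly, every input bit must be constrained to be boolean, and every bitwise operation becomes its own field multiplication. Here is a minimal, hypothetical illustration (not the actual Zcash circuit code), using the BLS12-381 scalar field mentioned above as the example field:

```python
# Illustrative only: how a single bit-level XOR is encoded as arithmetic
# constraints over a large prime field (here the BLS12-381 scalar field).
# SHA256 requires many thousands of such bit operations.

Q = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

def is_boolean(b):
    # b * (b - 1) = 0 forces b into {0, 1}: one constraint per witness bit.
    return b * (b - 1) % Q == 0

def xor_constraint_holds(a, b, c):
    # XOR as a quadratic constraint: 2a * b = a + b - c  (i.e. c = a + b - 2ab),
    # costing one multiplication gate per bit-level XOR.
    return 2 * a * b % Q == (a + b - c) % Q

for a in (0, 1):
    for b in (0, 1):
        assert is_boolean(a) and is_boolean(b)
        assert xor_constraint_holds(a, b, a ^ b)
print("XOR gadget satisfied for all inputs")
```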

We have long been searching for algebraic primitives to replace SHA256 in some of these contexts. The natural environment for elliptic curve cryptography is large prime fields, and so if it can be made efficient inside of the zk-SNARK circuit we would have access to many useful primitives like Pedersen commitments.

Introducing Jubjub

Kosba et al. previously explored fixed-base exponentiation inside of zk-SNARK circuits, achieving a cost of 6 constraints per bit of the scalar. Earlier this year, we beat this record with 4.2 constraints per bit of the scalar, using some algebraic tricks and an embedded curve we are calling Jubjub.

Jubjub is a twisted Edwards curve of the form :math:`-x^2 + y^2 = 1 + d x^2 y^2` built over the BLS12-381 scalar field, with :math:`d = -(10240/10241)`. Being a twisted Edwards curve, it has a complete addition law that avoids edge cases with doubling and identities, making it convenient to work with inside of an arithmetic circuit.

`(x_1, y_1) + (x_2, y_2) = (\frac{x_1 y_2 + y_1 x_2}{1 + d x_1 x_2 y_1 y_2}, \frac{y_1 y_2 + x_1 x_2}{1 - d x_1 x_2 y_1 y_2})`
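To make this concrete, here is a minimal Python sketch (illustrative only: not the production implementation, and not constant-time) of the curve parameters and the complete addition law above; the square-root helper exists only to manufacture example points, since real generators are fixed system parameters:

```python
# A minimal sketch of Jubjub arithmetic (illustrative only).

Q = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001  # BLS12-381 scalar field
D = (-10240 * pow(10241, -1, Q)) % Q                                    # d = -(10240/10241)

def on_curve(P):
    x, y = P
    return (-x * x + y * y - 1 - D * x * x * y * y) % Q == 0

def add(P1, P2):
    # The complete twisted Edwards addition law quoted above; it also handles
    # doubling and the identity without special cases.
    (x1, y1), (x2, y2) = P1, P2
    t = D * x1 * x2 * y1 * y2 % Q
    x3 = (x1 * y2 + y1 * x2) * pow((1 + t) % Q, -1, Q) % Q
    y3 = (y1 * y2 + x1 * x2) * pow((1 - t) % Q, -1, Q) % Q
    return (x3, y3)

def sqrt_mod_q(n):
    # Tonelli-Shanks square root mod Q; returns None if n is not a square.
    if pow(n, (Q - 1) // 2, Q) != 1:
        return None
    q, s = Q - 1, 0
    while q % 2 == 0:
        q, s = q // 2, s + 1
    z = 2
    while pow(z, (Q - 1) // 2, Q) != Q - 1:
        z += 1
    m, c, t, r = s, pow(z, q, Q), pow(n, q, Q), pow(n, (q + 1) // 2, Q)
    while t != 1:
        i, t2 = 0, t
        while t2 != 1:
            i, t2 = i + 1, t2 * t2 % Q
        b = pow(c, 1 << (m - i - 1), Q)
        m, c, t, r = i, b * b % Q, t * b * b % Q, r * b % Q
    return r

def point_from_x(x):
    # Solve -x^2 + y^2 = 1 + d x^2 y^2 for y; used here only to manufacture
    # example points.
    y2 = (1 + x * x) * pow((1 - D * x * x) % Q, -1, Q) % Q
    y = sqrt_mod_q(y2)
    return None if y is None else (x, y)

IDENTITY = (0, 1)
P, x = None, 1
while P is None:
    P, x = point_from_x(x), x + 1
assert on_curve(P) and add(P, IDENTITY) == P
assert add(add(P, P), P) == add(P, add(P, P))  # spot-check the group law
print("Jubjub point:", P)
```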

Rather than reaching for projective coordinate systems, we can leverage the fact that field inversion is “as cheap” as field multiplication inside of the zk-SNARK circuit to apply the law directly. Additionally, multiplication by constants (as in fixed-base exponentiation) is virtually free. We can exploit these properties to express the addition of a known point :math:`(x_2, y_2)`, conditioned on a boolean bit :math:`b`, in just five quadratic constraints:

  • `(x_1) \times (y_1) = (\beta)`
  • `(1 + d x_2 y_2 \beta) \times (x_3) = (x_1 y_2 + y_1 x_2)`
  • `(1 - d x_2 y_2 \beta) \times (y_3) = (y_1 y_2 + x_1 x_2)`
  • `(b) \times (x_3 - x_1) = (x_F - x_1)`
  • `(b) \times (y_3 - y_1) = (y_F - y_1)`
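As a sanity check, the following hypothetical witness computation (not taken from the Zcash codebase) derives :math:`\beta, x_3, y_3, x_F, y_F` the way a prover would and confirms that all five constraints hold for both values of :math:`b`. The sample values are arbitrary field elements standing in for Jubjub points, since only the constraint algebra is being illustrated:

```python
# Hypothetical witness check for the five conditional-addition constraints above.

Q = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001
D = (-10240 * pow(10241, -1, Q)) % Q

def check_conditional_add(x1, y1, x2, y2, b):
    # Witness values, computed the way a prover would.
    beta = x1 * y1 % Q
    x3 = (x1 * y2 + y1 * x2) * pow((1 + D * x2 * y2 * beta) % Q, -1, Q) % Q
    y3 = (y1 * y2 + x1 * x2) * pow((1 - D * x2 * y2 * beta) % Q, -1, Q) % Q
    xf = (x1 + b * (x3 - x1)) % Q  # select x3 if b == 1, else keep x1
    yf = (y1 + b * (y3 - y1)) % Q

    # The five quadratic (rank-1) constraints.
    assert x1 * y1 % Q == beta
    assert (1 + D * x2 * y2 * beta) * x3 % Q == (x1 * y2 + y1 * x2) % Q
    assert (1 - D * x2 * y2 * beta) * y3 % Q == (y1 * y2 + x1 * x2) % Q
    assert b * (x3 - x1) % Q == (xf - x1) % Q
    assert b * (y3 - y1) % Q == (yf - y1) % Q
    return (xf, yf)

# Arbitrary sample field elements; in the real circuit (x1, y1) is the running
# accumulator and (x2, y2) is a fixed constant.
for b in (0, 1):
    check_conditional_add(5, 7, 11, 13, b)
print("all five constraints satisfied")
```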

In practice, we also need to boolean-constrain each bit of the scalar, giving a total of around 1506 multiplication gates for a 251-bit scalar. This is roughly 6 constraints per bit, as in Kosba et al.

It turns out that we can do better than this. Intuitively, windowed exponentiation would seem expensive in this context, because you would have to resort to general point additions…

  • `(x_1) \times (y_2) = (\beta)`
  • `(y_1) \times (x_2) = (\gamma)`
  • `(y_1) \times (y_2) = (\delta)`
  • `(x_1) \times (x_2) = (\epsilon)`
  • `(\delta) \times (\epsilon) = (\tau)`
  • `(1 + d \tau) \times (x_3) = (\beta + \gamma)`
  • `(1 - d \tau) \times (y_3) = (\delta + \epsilon)`

in addition to a window table lookup, and the boolean constraints of the scalar bits.
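The same kind of hypothetical witness check works for the seven general-addition constraints (again with arbitrary field elements standing in for curve points, purely to illustrate the constraint algebra):

```python
# Hypothetical witness check for the seven general-addition constraints above.

Q = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001
D = (-10240 * pow(10241, -1, Q)) % Q

def check_general_add(x1, y1, x2, y2):
    # Intermediate witness values the prover would supply.
    beta  = x1 * y2 % Q
    gamma = y1 * x2 % Q
    delta = y1 * y2 % Q
    eps   = x1 * x2 % Q
    tau   = delta * eps % Q
    x3 = (beta + gamma) * pow((1 + D * tau) % Q, -1, Q) % Q
    y3 = (delta + eps) * pow((1 - D * tau) % Q, -1, Q) % Q

    # The seven quadratic constraints.
    assert x1 * y2 % Q == beta
    assert y1 * x2 % Q == gamma
    assert y1 * y2 % Q == delta
    assert x1 * x2 % Q == eps
    assert delta * eps % Q == tau
    assert (1 + D * tau) * x3 % Q == (beta + gamma) % Q
    assert (1 - D * tau) * y3 % Q == (delta + eps) % Q
    return (x3, y3)

check_general_add(5, 7, 11, 13)
print("all seven constraints satisfied")
```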

Constant window table lookups inside the circuit can be achieved using polynomials which vanish at the constants that aren’t being selected. As an example, given the table of constants :math:`(c_0, c_1, c_2, c_3)` and selection bits :math:`b_0` and :math:`b_1`, the selection of :math:`r = c_{b_0 + 2 b_1}` can be enforced with a single constraint:

  • `(b_0) \times (- c_0 + c_0 b_1 + c_1 - c_1 b_1 - c_2 b_1 + c_3 b_1) = (r - c_0 + c_0 b_1 - c_2 b_1)`
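A quick check (illustrative, with an arbitrary example table) that this constraint really selects :math:`c_{b_0 + 2 b_1}` for every combination of the two bits:

```python
# Check the 2-bit window lookup constraint above for all four bit patterns.

Q = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001

c = [3, 1, 4, 15]  # arbitrary example table of constants (c0, c1, c2, c3)

for b0 in (0, 1):
    for b1 in (0, 1):
        r = c[b0 + 2 * b1]  # the constant the lookup should produce
        lhs = b0 * (-c[0] + c[0] * b1 + c[1] - c[1] * b1 - c[2] * b1 + c[3] * b1) % Q
        rhs = (r - c[0] + c[0] * b1 - c[2] * b1) % Q
        assert lhs == rhs
print("lookup constraint selects the right constant for all four bit patterns")
```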

This technique can be applied to larger window tables, but the cost of evaluating the lookup grows exponentially with the window size. There happens to be a sweet spot at a window size of 4, which allows us to achieve approximately 4.2 constraints per bit.

Building Primitives

Now that we have fast ECC in the circuit, we can use Pedersen commitments for our notes rather than SHA256. We can also change our addresses to be fully asymmetric, making them smaller and more flexible.

But most importantly, we can build a hash function whose collision resistance reduces to the discrete logarithm problem on the Jubjub curve. Given a vector of random generators :math:`(g_1, \ldots, g_n)`, we can define a CRH over a vector of bits :math:`(b_1, \ldots, b_n)` as the abscissa (the :math:`x`-coordinate) of :math:`\prod_{i=1}^{n} g_i^{b_i}`. It is trivial to prove that finding a collision requires an adversary to solve the discrete logarithm problem.
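A toy version of this hash, reusing the `Q`, `add` and `point_from_x` helpers from the Jubjub sketch above (the generators here are manufactured purely for illustration; a real instantiation would use fixed, independently derived generators with no known discrete-log relations):

```python
# Toy Pedersen-style CRH over bits, reusing Q, add() and point_from_x() from
# the Jubjub sketch above. Illustrative only.

def toy_generators(n):
    gens, x = [], 1
    while len(gens) < n:
        p = point_from_x(x)
        if p is not None:
            gens.append(p)
        x += 1
    return gens

def pedersen_hash_bits(bits, gens):
    # Accumulate prod_i g_i^{b_i}; in additive notation this is the sum of the
    # generators selected by the bits. The digest is the abscissa (x-coordinate).
    acc = (0, 1)  # the identity element
    for b, g in zip(bits, gens):
        if b:
            acc = add(acc, g)
    return acc[0]

gens = toy_generators(8)
print(hex(pedersen_hash_bits([1, 0, 1, 1, 0, 0, 1, 0], gens)))
```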

It is convenient to have a relatively inexpensive collision-resistant hash function whose security reduces to the discrete log in this context, as it allows us to consolidate our space of cryptographic assumptions.

Implementation

We’ve written a prototype of Jubjub for benchmarking purposes, and as with everything we do, it is under a permissive open-source license and unencumbered by patents.