Lattice-Based zk-SNARKs from Square Span Programs

Zero-knowledge SNARKs (zk-SNARKs) are non-interactive proof systems with short and efficiently verifiable proofs. They elegantly resolve the tension between individual privacy and public trust by providing an efficient way of demonstrating knowledge of secret information without actually revealing it. Today, zk-SNARKs are used for delegating computation, electronic cryptocurrencies, and anonymous credentials. However, all current SNARK implementations rely on pre-quantum assumptions and, for this reason, are not expected to withstand cryptanalytic efforts over the next few decades. In this work, we introduce the first designated-verifier zk-SNARK based on lattice assumptions, which are believed to be post-quantum secure. We provide a generalization, in the spirit of Gennaro et al. (Eurocrypt'13), of the SNARK of Danezis et al. (Asiacrypt'14) that is based on Square Span Programs (SSPs) and relies on weaker computational assumptions. We focus on designated-verifier proofs and propose a protocol in which a proof consists of just 5 LWE encodings. We provide a concrete choice of parameters as well as extensive benchmarks on a C implementation, showing that our construction is practically instantiable.


Introduction
In a zero-knowledge proof, a powerful prover P can prove to a weaker verifier V that a particular statement $x \in L$ is true, for some NP language $L$ (with corresponding witness relation $R$), without revealing any additional information about the witness. For NP languages, P can be a polynomial-time machine that receives as additional input the witness $w$ for $x \in L$ (the witness is a proof that $x \in L$, i.e. that $R(x, w)$ holds, but it is not a zero-knowledge proof, since it reveals more information than the mere fact that $x \in L$). Since their introduction in [GMR89], zero-knowledge (ZK) proofs have been shown to be a very powerful instrument in the design of secure cryptographic protocols.
For practical applications, researchers immediately recognized two limiting factors in zero-knowledge proofs: the original protocols were interactive, and the proof could be as long as (if not longer than) the witness. Non-interactive zero-knowledge (NIZK) proofs [BFM88] and succinct ZK arguments [Kil92, Mic94] were introduced shortly thereafter. Those results were considered mostly theoretical proofs of concept until more recently, when several theoretical and practical breakthroughs showed that such proofs (renamed zk-SNARGs, for Succinct Non-interactive ARGuments, or zk-SNARKs if the proofs also guarantee that the prover knows the witness $w$) can indeed be used in practical applications.
Gennaro, Gentry, Parno and Raykova [GGPR13] proposed a new, influential characterization of the complexity class NP using Quadratic Span Programs (QSPs), a natural extension of the span programs defined by Karchmer and Wigderson [KW93]. They showed that there is a very efficient reduction from boolean circuit satisfiability problems to QSPs. Their work has led to fast progress towards practical verifiable computation. For instance, using Quadratic Arithmetic Programs (QAPs), a generalization of QSPs for arithmetic circuits, Pinocchio [PHGR13] provides evidence that verified remote computation can be faster than local computation. At the same time, their construction is used in the Zcash cryptocurrency to guarantee anonymity while preventing double-spending (via the ZK property).
The QSP approach was generalized in [BCI+13] under the concept of Linear PCPs (LPCPs); there is a construction of an LPCP for any QSP satisfiability problem. These are a form of interactive ZK proofs where security holds under the assumption that the prover is restricted to computing only affine combinations of its inputs. These proofs can then be turned into (designated-verifier) SNARKs by using a linear-only encryption, i.e. an encryption scheme in which an adversary can output a valid new ciphertext only if it is an affine combination of some previous encodings that the adversary had as input (intuitively, this "limited malleability" of the encryption scheme forces the prover into the above restriction).
So far, all known practical SNARKs rely on "classical" pre-quantum assumptions. Yet, widely deployed systems relying on SNARKs (such as the Zcash cryptocurrency [BCG+14b]) are not expected to withstand cryptanalytic efforts over the course of the next 10 years [ABL+17, Appendix C]. It is an interesting research question, as well as our responsibility as cryptographers, to provide protocols that can guarantee people's privacy over the next decades. We attempt to make a step forward in this direction by building a designated-verifier zk-SNARK that relies on the Learning With Errors (LWE) assumption, initially proposed by Regev in 2005 [Reg05] and by now the most widespread post-quantum cryptosystem supported by a theoretical proof of security.
SNARKs based on lattices. Recently, in two companion papers [BISW17, BISW18], Boneh et al. provided the first designated-verifier SNARK constructions based on lattice assumptions.
The first paper has two main results: an improvement on the LPCP construction of [BCI+13], and a construction of linear-only encryption based on LWE. The second paper presents a different approach, in which the information-theoretic LPCP is replaced by an LPCP with multiple provers, which is then compiled into a SNARK, again via linear-only encryption. The main advantage of this approach is that it reduces the overhead on the prover, achieving what they call quasi-optimality.
Our contributions. In this paper, we frame the construction of Danezis et al. [DFGK14] for Square Span Programs in the framework of "encodings" introduced by Gennaro et al. [GGPR13].
We slightly modify the definition of encoding to accommodate the noisy nature of LWE schemes. This allows us to have more fine-grained control over the error growth, while keeping previous examples of encodings suitable for our construction. Furthermore, SSPs are similar to, but simpler than, Quadratic Span Programs (QSPs), since they use a single sequence of polynomials rather than two or three. We use SSPs to build simpler and more efficient designated-verifier SNARKs and Non-Interactive Zero-Knowledge arguments (NIZKs) for circuit satisfiability (CIRC-SAT).
We think our approach is complementary to [BISW17, BISW18]. However, there are several reasons why we believe that our approach is preferable:
- Zero-Knowledge. The LPCP-based protocols in [BISW17, BISW18] are not ZK, and those works do not explicitly describe ways to make them ZK (except by referring to generic transformations). For the LPCP constructed from a QSP satisfiability problem, there is a general transformation that achieves the ZK property [BCI+13], but it introduces some overhead; moreover, it is unclear whether this transformation carries over to the lattice setting. In contrast, our protocol is SSP-based and can thus be made ZK at essentially no cost for either the prover or the verifier. Our transformation is different, exploiting special features of SSPs, and yields a zk-SNARK with almost no overhead (if an adapted encoding is used).
- Weaker Assumptions. The linear-only property introduced in [BCI+13] implies all the security assumptions needed by an SSP-suitable encoding, but the converse is not known to hold. Our proof of security therefore relies on weaker assumptions and, by doing so, "distills" the minimal known assumptions needed to prove security for SSPs, and instantiates them with an LWE-based approach.
- Simplicity and Efficiency. While the result in [BISW18] seems asymptotically more efficient than any SSP-based approach, we suspect that, for many applications, the simplicity and efficiency of the SSP construction will still provide a concrete advantage in practice. To drive this point home, we implemented our scheme and tested it on real-life applications, and the results are encouraging (on the other hand, no implementation is offered for [BISW17, BISW18], pointing to the theoretical nature of those results).
Technical challenges. Although conceptually similar to the original proof of security for QSP-based SNARKs, our proof encounters some specific technical challenges due to the noise growth of the LWE-based encoding. In particular, these impose additional LWE-specific verification checks not needed in a "pure" QSP implementation. Such issues arise from the reduction to the weaker assumptions used in our proofs and are not present in [BISW17, BISW18] because of the stronger linear-only assumption used there. Additionally, we incorporate some optimizations from SSP-based SNARKs [DFGK14].
Instantiating our encoding scheme with a lattice-based scheme like Regev encryption differs from [GGPR13] and introduces some technicalities, first in the verification step of the protocol, and second in the proof of security. Our encoding scheme is additively homomorphic and supports affine operations. On the other hand, we are constrained to allow only a limited number of homomorphic operations because of the bounded error growth in lattice-based encryption schemes. Since in these schemes the error is additive, to compute a linear combination of $N$ encodings (where the coefficients of the linear combination are drawn from a field $\mathbb{F} = \mathbb{Z}_p$), we need to scale some parameters for correctness to hold. However, if the encryption scheme supports modulus switching, it may be possible to work with a smaller modulus during decoding. In this work, we assume that we are allowed to perform just a bounded number of "linear" operations on encodings, and we make sure that this bound is sufficient to perform verification and to carry out the security reduction. Furthermore, the operations considered are affine rather than linear; the main reason for this adaptation is that the resulting description is more appropriate for our proposed lattice-based encoding (in which a careful analysis of the noise growth needs to be made).

Prerequisites
Notation. We denote the real numbers by $\mathbb{R}$, the natural numbers by $\mathbb{N}$, the integers by $\mathbb{Z}$, and the integers modulo some $q$ by $\mathbb{Z}_q$. Let $\lambda \in \mathbb{N}$ be the computational security parameter, and $\kappa \in \mathbb{N}$ the statistical security parameter. For two integers $a, b \in \mathbb{Z}$, we denote by $a \mathbin{//} b$ the quotient of the Euclidean division of $a$ by $b$. We say that a function $f$ is negligible in $\lambda$, denoted $f(\lambda) = \mathrm{negl}(\lambda)$, if $f(\lambda) = o(\lambda^{-c})$ for every fixed constant $c$. We say that a probability is overwhelming in $\lambda$ if it is $1 - \mathrm{negl}(\lambda)$. We let $\mathsf{M}.\mathrm{rl}(\lambda)$ be a length function (i.e. a polynomially bounded function $\mathbb{N} \to \mathbb{N}$) in $\lambda$ defining the length of the randomness for a probabilistic interactive Turing machine $\mathsf{M}$. When sampling the value $a$ uniformly at random from the set $S$, we write $a \xleftarrow{\$} S$. When sampling the value $a$ from the probabilistic algorithm $\mathsf{M}$, we write $a \leftarrow \mathsf{M}$. We use $:=$ to denote assignment. For an $n$-dimensional column vector $\mathbf{a}$, we denote its $i$-th entry by $a_i$. In the same way, given a polynomial $f$, we denote its $i$-th coefficient by $f_i$. Unless otherwise stated, the norm $\|\cdot\|$ considered in this work is the $\ell_2$ norm. We denote by $\mathbf{a} \cdot \mathbf{b}$ the dot product of vectors $\mathbf{a}$ and $\mathbf{b}$. For an NP relation $R$, with statements denoted by $u$ and witnesses denoted by $w$, we use $L(R)$ to denote the language associated to $R$.
Unless otherwise specified, all the algorithms defined throughout this work are assumed to be probabilistic Turing machines that run in time $\mathrm{poly}(\lambda)$, i.e. PPT. An adversary is denoted by $\mathcal{A}$; when it interacts with an oracle $\mathcal{O}$, we write $\mathcal{A}^{\mathcal{O}}$. For two PPT machines $\mathcal{A}, \mathcal{B}$, we write $(\mathcal{A} \| \mathcal{B})(x)$ for the execution of $\mathcal{A}$ followed by the execution of $\mathcal{B}$ on the same input $x$ and with the same random coins. The output of the two machines is concatenated and separated by a semicolon, e.g., $(\mathrm{out}_{\mathcal{A}}; \mathrm{out}_{\mathcal{B}}) \leftarrow (\mathcal{A} \| \mathcal{B})(x)$.

Square Span Programs
We characterize NP through Square Span Programs (SSPs) over a field $\mathbb{F}$ of order $p$. SSPs were first introduced by Danezis et al. [DFGK14].
Definition 1 (SSP). A Square Span Program (SSP) over the field $\mathbb{F}$ is a tuple consisting of $m+1$ polynomials $v_0(x), \dots, v_m(x) \in \mathbb{F}[x]$ and a target polynomial $t(x)$ such that $\deg(v_i(x)) \le \deg(t(x))$ for all $i = 0, \dots, m$. We say that the square span program $\mathsf{ssp}$ has size $m$ and degree $d = \deg(t(x))$. We say that $\mathsf{ssp}$ accepts an input $a_1, \dots, a_\ell \in \{0,1\}$ if and only if there exist $a_{\ell+1}, \dots, a_m \in \{0,1\}$ satisfying:
$$t(x) \text{ divides } \Big( v_0(x) + \sum_{i=1}^{m} a_i v_i(x) \Big)^2 - 1 .$$
We say that $\mathsf{ssp}$ verifies a boolean circuit $C : \{0,1\}^\ell \to \{0,1\}$ if it accepts exactly those inputs $(a_1, \dots, a_\ell) \in \{0,1\}^\ell$ that satisfy $C(a_1, \dots, a_\ell) = 1$.
Universal circuit. In the definition, we may see $C$ as a logical specification of a satisfiability problem. In our zk-SNARK we will split the inputs into $\ell_u$ public and $\ell_w$ private inputs, to make it compatible with universal circuits $C_U : \{0,1\}^{\ell_u} \times \{0,1\}^{\ell_w} \to \{0,1\}$ that take as input an $\ell_u$-bit description of a freely chosen circuit $C$ and an $\ell_w$-bit value $w$, and return $1$ if and only if $C(w) = 1$. Along the lines of [DFGK14], we consider the "public" inputs from the point of view of the prover. For an outsourced computation, they might include both the inputs sent by the clients and the outputs returned by the server performing the computation. For CIRC-SAT, they may provide a partial instantiation of the problem or parts of its solution. This treatment is more general than CIRC-SAT, for which $\ell_u = 0$, since the SSP is satisfied if the witness $w$ satisfies $C(w) = 1$.
Theorem 2 ([DFGK14, Theorem 2]). For any boolean circuit $C : \{0,1\}^\ell \to \{0,1\}$ of $m$ wires and $n$ fan-in 2 gates, and for any prime $p \ge \max(n, 8)$, there exist polynomials $v_0(x), \dots, v_m(x)$ and distinct roots $r_1, \dots, r_d \in \mathbb{F}$ such that $C$ is satisfiable if and only if:
$$\Big( v_0(r_j) + \sum_{i=1}^{m} a_i v_i(r_j) \Big)^2 = 1 \quad \text{for all } j = 1, \dots, d,$$
where $a_1, \dots, a_m \in \{0,1\}$ correspond to the values on the wires in a satisfying assignment for the circuit. Define $t(x) := \prod_{i=1}^{d} (x - r_i)$; then for any circuit $C : \{0,1\}^\ell \to \{0,1\}$ of $m$ wires and $n$ gates, there exists a degree $d = m + n$ square span program $\mathsf{ssp} = (v_0(x), \dots, v_m(x), t(x))$ over a field $\mathbb{F}$ of order $p$ that verifies $C$.
SSP generation. We consider the uniform probabilistic algorithm $\mathsf{SSP}$ that, on input a boolean circuit $C : \{0,1\}^\ell \to \{0,1\}$ of $m$ wires and $n$ gates, chooses a field $\mathbb{F}$ with $|\mathbb{F}| \ge \max(n, 8)$, and samples $d = m + n$ random elements $r_1, \dots, r_d \in \mathbb{F}$ to define the target polynomial $t(x) = \prod_{i=1}^{d} (x - r_i)$, together with the set of polynomials $\{v_0(x), \dots, v_m(x)\}$ composing the SSP corresponding to $C$:
$$(v_0(x), \dots, v_m(x), t(x)) \leftarrow \mathsf{SSP}(C).$$
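Since the roots $r_j$ are distinct, $t(x)$ divides a polynomial $f(x)$ exactly when $f(r_j) = 0$ for every root, so the SSP acceptance condition can be checked by evaluation alone. The following Python sketch illustrates the test on a hand-crafted toy SSP over $\mathbb{Z}_{17}$ (the polynomial values are our own illustrative choices, not the output of a real circuit reduction):

```python
# Toy illustration of the SSP acceptance test over F = Z_p.
# The evaluation tables below are hand-crafted for the example,
# not derived from an actual circuit.

p = 17                       # small prime field modulus
roots = [5, 9, 11]           # distinct roots r_1, ..., r_d of t(x)

# Evaluations at each root: v0[j] = v_0(r_j), v[i][j] = v_{i+1}(r_j) mod p.
# Chosen so that the assignment a = (1, 0) is accepted.
v0 = [0, 2, 1]
v  = [[1, 16, 0],            # v_1(r_j): note 16 = -1 mod 17
      [3, 4, 5]]             # v_2(r_j)

def ssp_accepts(a, v0, v, roots, p):
    """Check (v_0(r_j) + sum_i a_i v_i(r_j))^2 == 1 (mod p) at every root.
    Since the roots of t are distinct, this is equivalent to
    t(x) | (v_0(x) + sum_i a_i v_i(x))^2 - 1."""
    for j in range(len(roots)):
        s = v0[j] + sum(ai * vi[j] for ai, vi in zip(a, v))
        if (s * s) % p != 1:
            return False
    return True
```

For instance, `ssp_accepts([1, 0], v0, v, roots, p)` succeeds because the combined value is $\pm 1$ at every root, while `ssp_accepts([0, 1], v0, v, roots, p)` fails at the first root.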

Succinct Non-Interactive Arguments
In this section we provide formal definitions for the notion of succinct non-interactive arguments of knowledge (SNARKs).
Definition 3. A designated-verifier non-interactive proof system for a relation $R$ is a triple of algorithms $\Pi = (\mathsf{G}, \mathsf{P}, \mathsf{V})$ as follows:
- $(\mathsf{vrs}, \mathsf{crs}) \leftarrow \mathsf{G}(1^\lambda, R)$ takes as input the security parameter $1^\lambda$ and the relation $R$, and outputs a common reference string $\mathsf{crs}$, which is made public, and $\mathsf{vrs}$, a trapdoor key that will be used for verification. For simplicity, we will assume in what follows that $\mathsf{crs}$ can be extracted from $\mathsf{vrs}$, and that the unary security parameter $1^\lambda$ can be derived from $\mathsf{crs}$ as well.
- $\pi \leftarrow \mathsf{P}(\mathsf{crs}, u, w)$ takes as input the $\mathsf{crs}$, a statement $u$ and a witness $w$, and outputs a proof of knowledge $\pi$.
- $\mathsf{bool} \leftarrow \mathsf{V}(\mathsf{vrs}, u, \pi)$ takes as input a statement $u$ together with a proof $\pi$, and the trapdoor key $\mathsf{vrs}$, and outputs true if the proof is accepted, false otherwise.
If the verification algorithm $\mathsf{V}$ takes as input the $\mathsf{crs}$ instead of $\mathsf{vrs}$, then the non-interactive proof system is called publicly verifiable.
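As a syntax-only illustration of the triple $(\mathsf{G}, \mathsf{P}, \mathsf{V})$, consider the following Python sketch. The toy relation, the trivial "proof", and the dummy trapdoor are placeholders of our own; the example carries no succinctness, soundness, or zero-knowledge guarantees and only shows how the three algorithms fit together:

```python
import secrets

def rel(u, w):
    """Toy NP relation R: the statement u is the square of the witness w."""
    return u == w * w

def G(lam, rel):
    """Setup: output (vrs, crs). The random token stands in for the
    verifier's trapdoor; in a real scheme it would be a decryption key."""
    vrs = {"trapdoor": secrets.token_bytes(lam // 8), "rel": rel}
    crs = {"lam": lam}
    return vrs, crs

def P(crs, u, w):
    """Prover: output a proof for (u, w). This toy 'proof' simply reveals
    the witness, so it is neither succinct nor zero-knowledge."""
    return {"pi": w}

def V(vrs, u, pi):
    """Designated verifier: decide using the (here trivial) trapdoor state."""
    return vrs["rel"](u, pi["pi"])
```

A run of the interface: `vrs, crs = G(128, rel)`, then `V(vrs, 49, P(crs, 49, 7))` accepts, while the same proof for the false statement `u = 48` is rejected.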
Definition 4 (SNARK). A succinct non-interactive argument of knowledge (SNARK) is a non-interactive proof system that satisfies the additional properties of completeness, succinctness, and knowledge soundness.
Roughly speaking, completeness means that all correctly generated proofs verify; succinctness that the size of the proof is linear in the security parameter $\lambda$; knowledge soundness [BG93] that, for any prover able to produce a valid proof for a statement in the language, there exists an efficient algorithm capable of extracting a witness for the given statement. More formally:
Definition 5 (Completeness). A non-interactive proof system $\Pi$ for the relation $R$ is (computationally) complete if, for any PPT adversary $\mathcal{A}$, the game $\mathrm{COMPL}_{\Pi,R,\mathcal{A}}(\lambda)$ depicted in Fig. 1 returns true with overwhelming probability.
Definition 6 (Knowledge Soundness). A non-interactive proof system $\Pi$ for the relation $R$ is knowledge-sound if, for any PPT adversary $\mathcal{A}$, there exists an extractor $\mathrm{Ext}_{\mathcal{A}}$ such that the game $\mathrm{KSND}_{\Pi,R,\mathcal{A},\mathrm{Ext}_{\mathcal{A}}}(\lambda)$, defined in Figure 1, returns true with at most negligible probability.
Remark 7. An important consideration that arises when defining knowledge soundness in the designated-verifier setting is whether the adversary should be granted access to a proof-verification oracle. Pragmatically, allowing a verification oracle captures whether a CRS can be reused $\mathrm{poly}(\lambda)$ times. While this property follows immediately in the publicly verifiable setting, the same is not true in the designated-verifier setting. In the specific case of our construction, we formulate and prove our protocol under the stronger notion (which has been referred to as strong soundness in the past [BISW17]), and briefly discuss which optimizations can take place when using the weaker notion of soundness.
We distinguish two types of arguments of knowledge: publicly verifiable ones, where security holds against adversaries that have access to vrs; and designated-verifier ones, where the verification step needs access to vrs. It is straightforward to note that, with the help of an encryption scheme, any publicly verifiable proof system can be transformed into an analogous designated-verifier one. It is nonetheless important to note that, in the standard model, all constructions we are aware of so far somehow imply the existence of an encryption scheme.
A proof system $\Pi$ for $R$ is zero-knowledge if no information about the witness is leaked by the proof. More precisely:
Definition 8 (Zero-Knowledge). A non-interactive proof system $\Pi$ is zero-knowledge if there exists a simulator $\mathrm{Sim}$ such that no PPT adversary $\mathcal{A}$ wins the game $\mathrm{ZK}_{\Pi,R,\mathrm{Sim},\mathcal{A}}(\lambda)$, defined in Figure 1, with better than negligible advantage. Zero-knowledge SNARKs are informally called zk-SNARKs.

Encoding Schemes
Definition 9 (Encoding Scheme). An encoding scheme Enc over a field $\mathbb{F}$ is composed of the following algorithms:
- $(\mathsf{pk}, \mathsf{sk}) \leftarrow \mathsf{K}(1^\lambda)$, a key generation algorithm that takes as input the security parameter $1^\lambda$ and outputs some secret state sk together with some public information pk. To ease notation, we assume that the message space is always part of the public information and that pk can be derived from sk.
- $E(a)$, a non-deterministic encoding algorithm mapping a field element $a$ into some encoding space $S$, such that $\{\{E(a)\} : a \in \mathbb{F}\}$ partitions $S$, where $\{E(a)\}$ denotes the set of possible evaluations of the algorithm $E$ on $a$, that is, $\{E(a; r) : r \in \{0,1\}^{E.\mathrm{rl}(\lambda)}\}$. In other words, we require the decoding algorithm $D$ to be a function.
Depending on the encoding algorithm, $E$ will require either the public information pk generated by $\mathsf{K}$ or the secret state sk. For our application, targeted at designated-verifier proofs, it will be sk. To ease notation, we will omit this additional argument.
The above algorithms must satisfy the following properties:
- $d$-affinely homomorphic: there exists a $\mathrm{poly}(\lambda)$ algorithm Eval that, given as input the public parameters pk, a vector of encodings $(E(a_i))_{i=1}^{d}$, coefficients $\mathbf{c} = (c_i)_{i=1}^{d} \in \mathbb{F}^d$ and a constant factor $b \in \mathbb{F}$, outputs a valid encoding of $\mathbf{a} \cdot \mathbf{c} + b$ with probability overwhelming in $\lambda$. If the constant factor is omitted, it is assumed to be 0.
- quadratic root detection: there exists an efficiently computable algorithm $Q(\delta, pp)$ that, given as input some parameter $\delta$ (either the public information pk or the verification key sk, depending on the kind of verifier), can test whether the evaluation of a quadratic polynomial $pp$ with coefficients in the field is zero.
- image verification: there exists an efficiently computable algorithm $P$ that, given as input some parameter $\delta$ (again, either pk or sk), can decide whether an element $c$ is a correct encoding of a field element.
Sometimes, to ease notation, we will write $ct := \mathrm{Eval}((E(a_i))_i, (c_i)_i) = E(c)$, actually meaning that $ct$ is a valid encoding of $c = \sum_i a_i c_i$, that is, $ct \in \{E(c)\}$. It will be clear from the context (and from the use of the assignment symbol instead of the sampling one) that the randomized encoding algorithm is not actually invoked.
Decoding algorithm. When using an encryption scheme to instantiate an encoding scheme, we can naturally define the decoding algorithm $D$, which simply takes advantage of the decryption procedure. Encoding schemes that only need the public parameters pk to perform quadratic root detection and image verification lead to a SNARK that is publicly verifiable. Encoding schemes that rely on the secret state sk (as those we focus on in this work) lead instead to designated-verifier proofs. More specifically, since we study encoding schemes derived from encryption functions, quadratic root detection for designated verifiers is trivially obtained by using the decoding algorithm $D$.
Remark 10. Our specific instantiation of the encoding scheme presents some slight differences with [GGPR13]. First, we allow only a limited number of homomorphic operations, because of the error growth in lattice-based encoding schemes. Furthermore, these operations are affine rather than linear. The main reason for this adaptation is that the description is better suited to our proposed lattice-based encoding (in which a careful analysis of the noise growth needs to be made), while at the same time it does not exclude previous constructions.
The reason for allowing affine operations rather than limiting ourselves to linear ones is a mere technicality. The inhomogeneous part can always be constructed for linear-only schemes by adding $E(1)$ to the public information pk, which, as a matter of fact, happens to be already present in all previous encoding schemes. For example, in pairing-based encodings this is just the group generator, and it is usually included already in the pairing group description. The converse cannot be said of Regev encryption, where, given $E(m)$, it is always possible to compute a valid encoding of $m + 1$ without any additional information. As for the bounds on the number of allowed linear operations, these can simply be taken to be $\infty$ for the encodings proposed in the past [GGPR13].
In order to guarantee a security reduction for our construction of Section 4, we will have to guarantee that some encoding provided by the adversary is not "too noisy", and that it is still possible to perform homomorphic operations on it. To this end, we consider a function test-error which, given as input the secret state sk together with some encoding ct, returns true or false depending on whether it is still possible to compute a certain linear operation known in advance. Since the function takes as input the secret key itself, it is easy to build such a function relying just on the Eval and $P$ (image verification) algorithms.
Example 11. We present the classical example of an encoding scheme using symmetric pairings on elliptic curves. The asymmetric variant of this encoding scheme underlies the most classical examples of zk-SNARKs. Consider the cyclic groups $\mathbb{G}, \mathbb{G}_T$ of the same prime order $p$, equipped with a bilinear non-degenerate map $e : \mathbb{G} \times \mathbb{G} \to \mathbb{G}_T$. The groups $\mathbb{G}, \mathbb{G}_T$ are generated respectively by $G \in \mathbb{G}$ and by $e(G, G) \in \mathbb{G}_T$. For instance, the family of elliptic curves $\mathbb{G} := E(\mathbb{F}_q)$ described in [BF01] satisfies the above description. The encoding scheme simply computes $E : x \mapsto xG$. The public information pk consists of the pairing group description $\Gamma := (p, \mathbb{G}, \mathbb{G}_T, e, G)$; the secret state sk is set to $\bot$. This encoding satisfies the three requirements as follows:
- $d$-affine homomorphic evaluation of a vector of encodings $(E(a_i))_{i=1}^{d}$ with coefficients $(c_i)_{i=1}^{d}$ and constant term $b$ is performed with the group operation: the Eval algorithm simply outputs the group element $\sum_{i=1}^{d} c_i E(a_i) + bG = \big( \sum_{i=1}^{d} c_i a_i + b \big) G$.
- The efficiently computable quadratic root detection algorithm $Q$ simply relies on the pairing $e : \mathbb{G} \times \mathbb{G} \to \mathbb{G}_T$, and the quadratic test takes place in the target group $\mathbb{G}_T$. More concretely, given encodings $(E(a_i))_{i=1}^{d}$, use the bilinear map to compute $e(G, G)^{pp(a_1, \dots, a_d)}$, where $pp$ is a quadratic polynomial, and check whether it equals the identity element of $\mathbb{G}_T$.
- Image verification is straightforward: a group element $P \in \mathbb{G}$ is an encoding of a field element $s$ iff $P = sG = E(s)$.
A more concrete encoding scheme will be discussed in Section 3. In particular, we conjecture that it satisfies the assumptions of the following section.

Assumptions
Throughout this paper we rely on a number of computational assumptions. All of them have been introduced in the past (e.g., [GGPR13]); we report them here for completeness and in order to explore the relations between them.
The q-power knowledge of exponent assumption (q-PKE) is a generalization of the knowledge of exponent assumption (KEA) introduced by Damgård [Dam92]. It says that, given $E(s), \dots, E(s^q)$ and $E(\alpha s), \dots, E(\alpha s^q)$ for some coefficient $\alpha$, it is difficult to generate $ct, \widehat{ct}$, encodings of $c, \alpha c$, without knowing the linear combination of the powers of $s$ that produces $ct$.
Assumption 1 (q-PKE). The q-Power Knowledge of Exponent (q-PKE) assumption holds relative to an encoding scheme Enc and for the class $\mathcal{Z}$ of auxiliary input generators if, for every non-uniform polynomial-time auxiliary input generator $z \in \mathcal{Z}$ and every non-uniform PPT adversary $\mathcal{A}$, there exists a non-uniform extractor $\mathrm{Ext}_{\mathcal{A}}$ such that the game $q\text{-}\mathrm{PKE}_{\mathrm{Enc},\mathcal{A},\mathrm{Ext}_{\mathcal{A}}}(\lambda)$, depicted in Figure 2, returns true with at most negligible probability.
The q-PDH assumption has been a long-standing, standard q-type assumption [Gro10, BBG05]. Basically, it states that, given $E(s), \dots, E(s^q), E(s^{q+2}), \dots, E(s^{2q})$, it is hard to compute an encoding of the missing power $E(s^{q+1})$.
Finally, we need another assumption in order to be able to "compare" encoded messages. The q-PKEQ assumption boils down to the question of whether $\mathcal{A}$ can output $(E(c), e)$ without $\mathrm{Ext}_{\mathcal{A}}$ being able to tell whether $e$ is also an encoding of $c$.
Assumption 3 (q-PKEQ). The q-Power Knowledge of Equality (q-PKEQ) assumption holds for the encoding scheme Enc if, for every PPT adversary $\mathcal{A}$, there exists an extractor $\mathrm{Ext}_{\mathcal{A}}$ such that the game $q\text{-}\mathrm{PKEQ}_{\mathrm{Enc},\mathcal{A},\mathrm{Ext}_{\mathcal{A}}}(\lambda)$, depicted in Figure 2, returns true with at most negligible probability.
This last assumption is needed solely in the case where the attacker has access to a verification oracle (see Remark 7). Since the encoding could be non-deterministic, the simulator in the security reduction of Section 5.2 needs to rely on q-PKEQ to simulate the verification oracle. Pragmatically, this assumption allows us to test equality of two encoded messages even without having access to the secret key.

Lattice-based encodings
In this section we give a brief introduction to lattices and we describe a possible encoding scheme based on lattice assumptions.
Lattices. An $m$-dimensional lattice $\Lambda$ is a discrete additive subgroup of $\mathbb{R}^m$. For an integer $k < m$ and a rank-$k$ matrix $B \in \mathbb{R}^{m \times k}$, $\Lambda(B) = \{ B\mathbf{x} \in \mathbb{R}^m \mid \mathbf{x} \in \mathbb{Z}^k \}$ is the lattice generated by the columns of $B$.
Gaussian distribution. For any $\sigma \in \mathbb{R}^+$, let $\rho_\sigma(\mathbf{x}) := e^{-\pi \|\mathbf{x}\|^2 / \sigma^2}$ be the Gaussian function over $\mathbb{R}^n$ with mean $0$ and parameter $\sigma$. For any discrete subset $D \subseteq \mathbb{R}^n$, we define $\rho_\sigma(D) := \sum_{\mathbf{x} \in D} \rho_\sigma(\mathbf{x})$, the discrete integral of $\rho_\sigma$ over $D$. We then define $\chi_\sigma$, the discrete Gaussian distribution over $D$ with mean $0$ and parameter $\sigma$, as:
$$\chi_\sigma(\mathbf{x}) := \frac{\rho_\sigma(\mathbf{x})}{\rho_\sigma(D)} .$$
We denote by $\chi^n_\sigma$ the discrete Gaussian distribution over $\mathbb{R}^n$ where each entry is independently sampled from $\chi_\sigma$.
Lattice-based Encoding Scheme. We propose an encoding scheme Enc consisting of the three algorithms depicted in Figure 4. It is a slight variation of the classical LWE cryptosystem initially presented by Regev [Reg05], described by parameters $\Gamma := (p, q, n, \alpha)$, with $p, q, n \in \mathbb{N}$ and $0 < \alpha < 1$. This construction is an extension of the one presented in [BV11].
We assume the existence of a deterministic algorithm Pg that, given as input the security parameter in unary $1^\lambda$, outputs an LWE encoding description $\Gamma$. Similar assumptions have been used in the past by Bellare et al. [BFS16] for bilinear group descriptions. The main advantage of choosing Pg to be deterministic is that every entity can (re)compute the description for a given security parameter, and that no single party needs to be trusted with generating the encoding parameters. Moreover, real-world encodings have fixed parameters for some well-known values of $\lambda$. For the sake of simplicity, we define our encoding scheme with respect to an LWE encoding description $\Gamma$ and assume that the security parameter $\lambda$ can be derived from $\Gamma$.
Roughly speaking, the public information consists of the LWE parameters $\Gamma$, and an encoding of $m$ is simply an LWE encryption of $m$. The LWE secret key constitutes the secret state of the encoding scheme. We say that the encoding scheme is (statistically) correct if all valid encodings are decoded successfully (with overwhelming probability).
Assumption 4 (dLWE). The decisional Learning With Errors (dLWE) assumption holds for the parameter generation algorithm Pg if, for any PPT adversary $\mathcal{A}$, the advantage in the game $\mathrm{dLWE}_{\mathrm{Pg},\mathcal{A}}(\lambda)$, defined in Figure 3, is negligible.
In [Reg05], Regev showed that solving the decisional LWE problem is as hard as solving some lattice problems in the worst case. We recall here this result:
Theorem 12 (Hardness of dLWE [Reg05]). For any parameter generation algorithm Pg outputting $p = \mathrm{poly}(\lambda)$, a modulus $q \le 2^{\mathrm{poly}(n)}$, and a (discretized) Gaussian error distribution parameter $\sigma = \alpha q \ge 2\sqrt{n}$ with $0 < \alpha < 1$, solving $\mathrm{dLWE}_{\mathrm{Pg},\mathcal{A}}(\lambda)$ is at least as hard as solving $\mathrm{GapSVP}_{\tilde{O}(n/\alpha)}$.
Definition 13. An encoding scheme Enc is correct if, for any $\mathbf{s} \leftarrow \mathsf{K}(1^\lambda)$ and $m \in \mathbb{Z}_p$: $\Pr[D(\mathbf{s}, E(\mathbf{s}, m)) \ne m] = \mathrm{negl}(\lambda)$.
We say that an encoding ct of a message $m$ under secret key $\mathbf{s}$ is valid if $D(\mathbf{s}, ct) = m$. We say that an encoding is fresh if it is generated through the $E$ algorithm, and stale otherwise.
Lemma 14 (Correctness). Let $ct = (-\mathbf{a},\, \mathbf{a} \cdot \mathbf{s} + pe + m)$ be an encoding. Then $ct$ is a valid encoding of a message $m \in \mathbb{Z}_p$ if $|e| < \frac{q}{2p}$.
Image verification and quadratic root detection can be implemented using $D$, providing the secret key as input. The algorithm $P$ for image verification proceeds as follows: decrypt the encoded element and test for equality between the two messages. The algorithm $Q$ for quadratic root detection is straightforward: decrypt the messages and evaluate the polynomial, testing whether it equals 0. Given a vector of $d$ encodings $\mathbf{ct} \in \mathbb{Z}_q^{d \times (n+1)}$, a vector of coefficients $\mathbf{c} \in \mathbb{Z}_p^d$ and a constant $b \in \mathbb{Z}_p$, the homomorphic evaluation algorithm is defined as follows: $\mathrm{Eval}(\mathbf{ct}, \mathbf{c}, b) := \mathbf{c} \cdot \mathbf{ct} + (\vec{0}, b)$. As previously mentioned, whenever $b$ is omitted from the arguments of Eval, we implicitly mean $b = 0$. During the homomorphic evaluation, the noise grows as a result of the operations performed on the encodings. Consequently, in order to ensure that the output of Eval is still a valid encoding, we need to start with sufficiently small noise in each of the initial encodings.
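The algorithms of Figure 4 and the Eval operation above can be sketched in Python as follows. The concrete parameters are toy values of our choosing (far too small to be secure), and a rounded continuous Gaussian stands in for the discrete Gaussian $\chi_\sigma$:

```python
import random

# Toy parameters (insecure; for illustration only).
p, q, n = 13, 2**20, 8       # plaintext modulus, ciphertext modulus, dimension
alpha = 1e-4
sigma = alpha * q            # Gaussian parameter sigma = q * alpha

def K():
    """Key generation: the secret state is a uniform vector s in Z_q^n."""
    return [random.randrange(q) for _ in range(n)]

def E(s, m):
    """Encode m in Z_p as a Regev-style ciphertext (-a, a.s + p*e + m)."""
    a = [random.randrange(q) for _ in range(n)]
    e = round(random.gauss(0, sigma))          # stand-in for chi_sigma
    c1 = (sum(ai * si for ai, si in zip(a, s)) + p * e + m) % q
    return [(-ai) % q for ai in a] + [c1]

def D(s, ct):
    """Decode: centred reduction of c0.s + c1 mod q, then reduction mod p."""
    v = (sum(ci * si for ci, si in zip(ct[:n], s)) + ct[n]) % q
    if v > q // 2:                             # lift to (-q/2, q/2]
        v -= q
    return v % p

def Eval(cts, coeffs, b=0):
    """Affine combination sum_i c_i * ct_i + (0, ..., 0, b) over Z_q."""
    out = [0] * (n + 1)
    for c, ct in zip(coeffs, cts):
        for j in range(n + 1):
            out[j] = (out[j] + c * ct[j]) % q
    out[n] = (out[n] + b) % q
    return out
```

With these parameters, a fresh encoding decodes correctly, and a short affine combination such as `Eval([E(s, 2), E(s, 3)], [4, 1], b=1)` decodes to $(2 \cdot 4 + 3 + 1) \bmod p$, since the accumulated noise $p \sum_i c_i e_i$ stays well below $q/2$.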
In order to bound the size of the noise, we first need a tail bound for discrete Gaussian distributions due to Banaszczyk [Ban95]:
Lemma 15 ([Ban95, Lemma 2.4]). For any $\sigma, T \in \mathbb{R}^+$ and $\mathbf{a} \in \mathbb{R}^n$:
$$\Pr_{\mathbf{x} \leftarrow \chi^n_\sigma} \big[ |\mathbf{x} \cdot \mathbf{a}| \ge T \sigma \|\mathbf{a}\| \big] < 2 e^{-\pi T^2} .$$
At this point, the following corollary is immediate:
Corollary 16. Let $\mathbf{s} \xleftarrow{\$} \mathbb{Z}^n_q$ be a secret key and $\mathbf{m} = (m_1, \dots, m_d) \in \mathbb{Z}^d_p$ be a vector of messages. Let $\mathbf{ct}$ be a vector of $d$ fresh encodings, so that $ct_i \leftarrow E(\mathbf{s}, m_i)$, and let $\mathbf{c} \in \mathbb{Z}^d_p$ be a vector of coefficients. If $q > 2p^2 \sigma \sqrt{\kappa d / \pi}$, then $\mathrm{Eval}(\mathbf{ct}, \mathbf{c})$ outputs a valid encoding of $\mathbf{m} \cdot \mathbf{c}$ under the secret key $\mathbf{s}$ with probability overwhelming in $\kappa$.
Proof. The fact that the message part is $\mathbf{m} \cdot \mathbf{c}$ follows from simple homomorphic linear operations on the encodings. The final encoding is then valid if the error does not grow too much during these operations. Let $\mathbf{e} \in \mathbb{Z}^d$ be the vector of all the error terms in the $d$ encodings, and let $T = \sqrt{\kappa / \pi}$. Then by Lemma 15 we have:
$$\Pr\big[ |\mathbf{e} \cdot \mathbf{c}| \ge T \sigma \|\mathbf{c}\| \big] < 2 e^{-\pi T^2} = 2 e^{-\kappa} .$$
For correctness we need the absolute value of the final noise to be less than $q/2p$ (cf. Lemma 14). Since it holds that $\|\mathbf{c}\| \le p\sqrt{d}$ for every $\mathbf{c} \in \mathbb{Z}^d_p$, we can state that correctness holds if $T \sigma p \sqrt{d} < q/2p$, which gives $q > 2p^2 \sigma \sqrt{\kappa d / \pi}$. □
Figure 4 (the lattice-based encoding scheme):
- $\mathsf{K}(1^\lambda)$: $\Gamma := (p, q, n, \alpha) := \mathrm{Pg}(1^\lambda)$; $\mathbf{s} \xleftarrow{\$} \mathbb{Z}^n_q$; return $(\Gamma, \mathbf{s})$.
- $E(\mathbf{s}, m)$: $\Gamma := (p, q, n, \alpha) := \mathrm{Pg}(1^\lambda)$; $\mathbf{a} \xleftarrow{\$} \mathbb{Z}^n_q$; $\sigma := q\alpha$; $e \leftarrow \chi_\sigma$; return $(-\mathbf{a},\, \mathbf{a} \cdot \mathbf{s} + pe + m)$.
- $D(\mathbf{s}, (\mathbf{c}_0, c_1))$: $\Gamma := (p, q, n, \alpha) := \mathrm{Pg}(1^\lambda)$; return $(\mathbf{c}_0 \cdot \mathbf{s} + c_1) \bmod p$.
Smudging. When computing a linear combination of encodings, the distribution of the error term in the final encoding depends on the coefficients of the combination, and it could therefore potentially leak information to whoever holds the secret key. We can solve this problem with the well-known technique of noise smudging (or flooding): roughly speaking, adding a large enough term to the noise cancels out any dependency on the coefficients we want to hide.
Lemma 17 (Noise Smudging, [BGGK17]). Let $B_1 = B_1(\kappa)$ and $B_2 = B_2(\kappa)$ be positive integers. Let $x \in [-B_1, B_1]$ be a fixed integer and $y \leftarrow_\$ [-B_2, B_2]$. Then the distribution of $y$ is statistically indistinguishable from that of $y + x$, as long as $B_1/B_2 = \mathsf{negl}(\kappa)$.
Proof. Let $\Delta$ denote the statistical distance between the two distributions. By definition:
$$\Delta = \frac{1}{2}\sum_{v} \left|\Pr[y = v] - \Pr[y = v - x]\right| = \frac{|x|}{2B_2 + 1} \leq \frac{B_1}{B_2}.$$
The result follows immediately. □

In order to preserve the correctness of the encoding scheme, we once again need $q$ to be large enough to accommodate the flooding noise. In particular, $q$ will have to be at least superpolynomial in the statistical security parameter $\kappa$.
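The smudging bound is easy to check numerically. The helper below (a small exact computation of ours, not from the paper) evaluates the statistical distance between $U[-B_2, B_2]$ and its shift by a fixed $x$, which equals $|x|/(2B_2 + 1)$.

```python
from fractions import Fraction

def smudging_distance(x, B2):
    """Exact statistical distance between U[-B2, B2] and x + U[-B2, B2]
    for a fixed integer shift x; equals |x| / (2*B2 + 1)."""
    support = range(-B2 - abs(x), B2 + abs(x) + 1)
    pr = Fraction(1, 2 * B2 + 1)

    def mass(v, shift):
        # probability mass of v under the distribution shifted by `shift`
        return pr if -B2 <= v - shift <= B2 else Fraction(0)

    return sum(abs(mass(v, 0) - mass(v, x)) for v in support) / 2
```

For instance, a shift bounded by $B_1 = 4$ against $B_2 = 1000$ gives distance $4/2001$; taking $B_2 = 2^\kappa B_1$ makes the distance $\approx 2^{-\kappa}$, matching Corollary 18.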
Corollary 18. Let $\mathbf{s} \in \mathbb{Z}_q^n$ be a secret key and $\mathbf{m} = (m_1, \ldots, m_d) \in \mathbb{Z}_p^d$ be a vector of messages. Let $\mathbf{ct}$ be a vector of $d$ encodings such that $\mathbf{ct}_i$ is a valid encoding of $m_i$, and let $\mathbf{c} \in \mathbb{Z}_p^d$ be a vector of coefficients. Let $e_{\mathsf{Eval}}$ be the noise in the encoding output by $\mathsf{Eval}(\mathbf{ct}, \mathbf{c})$ and $B_{\mathsf{Eval}}$ a bound on its absolute value. Finally, let $B_{\mathsf{sm}} = 2^\kappa B_{\mathsf{Eval}}$ and $e_{\mathsf{sm}} \leftarrow_\$ [-B_{\mathsf{sm}}, B_{\mathsf{sm}}]$. Then the statistical distance between the distribution of $e_{\mathsf{sm}}$ and that of $e_{\mathsf{sm}} + e_{\mathsf{Eval}}$ is at most $2^{-\kappa}$. Moreover, if $q > 2pB_{\mathsf{Eval}}(2^\kappa + 1)$, then the result of $\mathsf{Eval}(\mathbf{ct}, \mathbf{c}) + (\mathbf{0}, e_{\mathsf{sm}})$ is a valid encoding of $\mathbf{m} \cdot \mathbf{c}$ under the secret key $\mathbf{s}$.
Proof. The claim on the statistical distance follows immediately from Lemma 17, and the fact that the message part is $\mathbf{m} \cdot \mathbf{c}$ follows from the homomorphic linear operations on the encodings. To ensure that the final result is a valid encoding, we need the error in the output encoding to remain smaller than $q/2p$. The final error is upper bounded by $B_{\mathsf{Eval}} + B_{\mathsf{sm}} = B_{\mathsf{Eval}}(2^\kappa + 1)$, so we need $B_{\mathsf{Eval}}(2^\kappa + 1) < q/2p$, which gives the stated condition. □

Error testing. By making non-black-box use of our LWE encoding scheme, it is possible to implement the function test-error (cf. Section 2), which will be used later in our construction to guarantee the existence of a security reduction. For LWE encodings, it suffices to use the secret key to recover the error and enforce an upper bound on its norm (namely, the norm of the error must still allow for some homomorphic operations without breaking correctness). A possible implementation of test-error is displayed in Figure 5.
We now give a lemma that will be useful later in the security proof. It essentially defines the conditions under which we can take an encoding, add a smudging term to its noise, sum it with the output of an execution of Eval, and finally multiply the result by an element of $\mathbb{Z}_p$.
Lemma 19 (For reduction). Let $\mathbf{s}, \mathbf{ct}, \mathbf{c}, e_{\mathsf{Eval}}, B_{\mathsf{Eval}}$ be as in Corollary 18, and let $\mathbf{ct}' = (-\mathbf{a}', \mathbf{s} \cdot \mathbf{a}' + pe' + m')$ be a valid encoding of a message $m' \in \mathbb{Z}_p$ with noise $e'$ bounded by $B_e$. Let $B_{\mathsf{sm}} = 2^\kappa B_e$ and let $e_{\mathsf{sm}} \leftarrow_\$ [-B_{\mathsf{sm}}, B_{\mathsf{sm}}]$ be a "smudging noise". Then, if $q > 2p^2\left((2^\kappa + 1)B_e + B_{\mathsf{Eval}}\right)$, it is possible to add the smudging term $e_{\mathsf{sm}}$ to $\mathbf{ct}'$, sum the result with the output of $\mathsf{Eval}(\mathbf{ct}, \mathbf{c})$, multiply the outcome by a coefficient $k$ bounded by $p$, and obtain a valid encoding of $k(\mathbf{m} \cdot \mathbf{c} + m')$.
Proof. The correctness of the message part comes immediately from the homomorphic linear operations on encodings, and the final output is valid if the noise remains below a certain threshold. After adding the smudging term and performing the sum, the noise term is at most $B_e + B_{\mathsf{sm}} + B_{\mathsf{Eval}} = (2^\kappa + 1)B_e + B_{\mathsf{Eval}}$. After the multiplication by a coefficient bounded by $p$, it is at most $p\left((2^\kappa + 1)B_e + B_{\mathsf{Eval}}\right)$. Thus, the encoding is valid if
$$p\left((2^\kappa + 1)B_e + B_{\mathsf{Eval}}\right) < \frac{q}{2p},$$
which immediately gives the result. □

Conditions on the modulus q. Corollaries 16 and 18 and Lemma 19 give the conditions that the modulus $q$ has to satisfy in order to allow for all the necessary computations. In particular, Corollary 16 gives the condition for homomorphically evaluating a linear combination of fresh encodings through the algorithm Eval; Corollary 18 gives the condition for adding a smudging noise to the result of such an evaluation; Lemma 19 gives a condition that will have to be satisfied in the security reduction. They are ordered from the least to the most stringent, so the condition that must be satisfied in the end is the one given by Lemma 19. Let $B_e$ be a bound on the absolute value of $e'$; then the following must hold:
$$q > 2p^2\left((2^\kappa + 1)B_e + B_{\mathsf{Eval}}\right). \quad (3)$$

Practical considerations. A single encoded value has size $(n+1)\log q = \tilde{O}(\lambda)$. Therefore, as long as the prover sends a constant number of encodings, the proof is guaranteed to be (quasi) succinct. Moreover, we can generate the random vector $\mathbf{a}$ that forms the first component of an encoding by expanding the output of a seeded PRG. This has been proven secure in the random oracle model [Gal13].
Although the scheme requires the noise terms to be sampled from a discrete Gaussian distribution, for practical purposes we can sample them from a bounded uniform distribution (see, e.g., [MP13] for a formal assessment of the hardness of LWE in this case). In particular, given $\chi_\sigma$, one can choose a coefficient $T \in \mathbb{N}$ such that $\Pr[x \leftarrow \chi_\sigma : |x| > T\sigma]$ is as small as desired, and then sample the error uniformly from $[-T\sigma, T\sigma]$.
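The tail-cut choice can be sketched as follows; the helpers are illustrative, and the tail estimate is the one-dimensional case of the Banaszczyk bound used above.

```python
import math
import random

def tail_bound(T):
    """Gaussian tail mass Pr[|x| > T*sigma], roughly bounded by 2*exp(-pi*T^2)
    (cf. the Banaszczyk-style bound of Lemma 15 in dimension 1)."""
    return 2 * math.exp(-math.pi * T * T)

def sample_bounded(sigma, T=8, rng=random):
    """Sample the noise uniformly from [-T*sigma, T*sigma] instead of chi_sigma."""
    B = int(T * sigma)
    return rng.randint(-B, B)
```

With $T = 8$ (the choice used later in Section 6), the cut tail has mass about $2\exp(-64\pi)$, far below $2^{-128}$, so replacing the Gaussian by the bounded uniform distribution loses essentially nothing in practice.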

Our designated-verifier zk-SNARK
Let Enc be an encoding scheme (Definition 9). Let $C$ be a circuit taking as input an $\ell$-bit string and outputting 0 or 1, where $\ell := \ell_u + \ell_w$, $\ell_u$ is the length of the "public" input, and $\ell_w$ the length of the private input. The value $m$ corresponds to the number of wires in $C$ and $n$ to the number of fan-in-2 gates. Let $d := m + n$. We construct a zk-SNARK scheme for any functions $\ell_u, \ell_w$ and families $\mathcal{R}_\lambda$ of relations $\mathcal{R}$ on pairs $(u, w) \in \{0,1\}^{\ell_u} \times \{0,1\}^{\ell_w}$ that can be computed by polynomial-size circuits $C$ with $m$ wires and $n$ gates. Our protocol is formally depicted in Figure 6.
Prover. The prover algorithm, on input a statement $u := (a_1, \ldots, a_{\ell_u})$, computes a witness $w := (a_{\ell_u+1}, \ldots, a_m)$ such that $(u, w) = (a_1, \ldots, a_m)$ is a satisfying assignment for the circuit $C$, where the $(a_i)_i$ satisfy the SSP conditions of Theorem 2. Then, it samples $\gamma \leftarrow_\$ \mathbb{F}$ and sets $\nu(x) := v_0(x) + \sum_{i=1}^m a_i v_i(x) + \gamma t(x)$. Let $h(x) := (\nu(x)^2 - 1)/t(x)$, whose coefficients can be computed from the polynomials provided in the ssp. By affine evaluation it is possible to compute the proof elements. In fact, $H$ — respectively $\hat{H}$ — can be computed from the encodings of $s, \ldots, s^d$ — respectively $\alpha s, \ldots, \alpha s^d$ — and the coefficients of Equation (5). The element $\hat{V}$ can be computed from the encodings of $\alpha s, \ldots, \alpha s^d$. Finally, $V_w$ — respectively $B_w$ — can be computed from the encodings of $s, \ldots, s^d$ — respectively $\beta t(s), \beta v_{\ell_u+1}(s), \ldots, \beta v_m(s)$. All these affine evaluations involve at most $d$ terms, and the coefficients are bounded by $p$. Using the above elements, the prover returns the proof $\pi := (H, \hat{H}, \hat{V}, V_w, B_w)$.
Verifier. Upon receiving a proof $\pi$ and a statement $u = (a_1, \ldots, a_{\ell_u})$, the verifier proceeds as follows. First, it uses the quadratic root detection algorithm of the encoding scheme Enc to verify that the proof satisfies:
$$\hat{h}_s - \alpha h_s = 0 \quad\text{and}\quad \hat{v}_s - \alpha v_s = 0, \quad \text{(eq-pke)}$$
$$(v_s^2 - 1) - h_s t_s = 0, \quad \text{(eq-div)}$$
$$b_s - \beta w_s = 0, \quad \text{(eq-lin)}$$
where $(h_s, \hat{h}_s, \hat{v}_s, w_s, b_s)$ are the values encoded in $(H, \hat{H}, \hat{V}, V_w, B_w) := \pi$ and $v_s := v_0(s) + \sum_{i \leq \ell_u} a_i v_i(s) + w_s$ as per Fig. 6. Then, the verifier checks whether it is still possible to perform some homomorphic operations, using the test-error procedure described in Section 2 and implemented in Figure 5 for the specific case of lattice encodings. More precisely, the verifier tests whether it is still possible to add another encoding and multiply the result by an element bounded by $p$ without compromising the correctness of the encoded element. This guarantees the existence of a reduction in the knowledge soundness proof of Section 5.2. If all the above checks pass, return true; otherwise, return false.
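For intuition, the three checks can be written directly on decoded values, as in the hypothetical helper below (ours, for illustration only: the real verifier runs the checks through the quadratic root detection algorithm Q without ever decoding the proof elements).

```python
def verify_decoded(hs, hhat, vhat, ws, bs, vs_pub, alpha, beta, ts, p):
    """The three checks (eq-pke, eq-div, eq-lin) on decoded values.

    vs_pub plays the role of v_0(s) + sum_{i <= l_u} a_i v_i(s), computed
    by the verifier from the public statement; ts is t(s).
    """
    vs = (vs_pub + ws) % p  # v_s = v_0(s) + sum_{i<=l_u} a_i v_i(s) + w_s
    eq_pke = (hhat - alpha * hs) % p == 0 and (vhat - alpha * vs) % p == 0
    eq_div = ((vs * vs - 1) - hs * ts) % p == 0
    eq_lin = (bs - beta * ws) % p == 0
    return eq_pke and eq_div and eq_lin
```

For example, with $t(s) = 5$, $h_s = 7$ and $v_s = 6$ we have $v_s^2 - 1 = h_s t(s) = 35$, so a consistently built tuple passes all three checks, while perturbing any single component fails one of them.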
Remark 20. Instantiating our encoding scheme on top of a "noisy" encryption scheme like Regev's introduces several technicalities that affect the protocol, the security proof, and the choice of parameters. For instance, in order to compute a linear combination of $d$ encodings via Eval, we need to scale down the error parameter and consequently increase the parameters $q$ and $n$ in order to maintain correctness and security. Similarly, for the proof to go through, the adversary must be able to perform the same number of homomorphic operations in the real protocol as in the reductions, where we synthesize a CRS from a q-PDH challenge. All these issues are formally addressed in Section 6.

Proofs of security
In this section, we prove our main theorem:

Theorem 21. If the q-PKE, q-PKEQ and q-PDH assumptions hold for the encoding scheme Enc, then the protocol Π instantiated with Enc is a zk-SNARK with statistical completeness, statistical zero-knowledge and computational knowledge soundness.

Zero-Knowledge
To obtain a zero-knowledge protocol, we do two things: we add a smudging term to the noise of the encodings, in order to make the distribution of the final noise independent of the coefficients $a_i$, and we add random multiples of the target polynomial $t(x)$ to the answers.
Proof (of zero-knowledge). The simulator for zero-knowledge is shown in Figure 7. Checking that the proof output by Sim indeed verifies is trivial. Statistical zero-knowledge follows by observing that a simulated argument and a real one follow statistically close distributions. First, note that in the real world, since $\gamma$ is chosen uniformly at random in $\mathbb{F}$ and $t(s) \neq 0$, the value $\gamma t(s)$ is uniformly random as well. Therefore, $V_w$ is an encoding of some uniformly random value $w_s$. Once $V_w$ is fixed, the verification equations unequivocally define $B_w$ (which is an encoding of $\beta w_s$ in both worlds), $\hat{V}$ (which is an encoding of $\alpha v_s$ for $v_s = v_0(s) + \sum_i a_i v_i(s) + w_s$ in both worlds) and $H, \hat{H}$, which follow the same distribution in both worlds by the same reasoning. Moreover, the simulator Sim samples the noise from a distribution that is statistically close to the one used in the real world: concretely, Corollary 18 guarantees that the smudged encodings output by the prover are statistically indistinguishable from the smudged simulated values. □

The zero-knowledge property is certainly interesting, but SNARKs are already appealing on their own, even without this feature. If we are only interested in building SNARKs (and not zk-SNARKs), we can simplify the protocol by removing $\gamma t(x)$ from the computation of $h(x)$. Also, we no longer need to "smudge out" the noise, which leads to better bounds on the noise growth. This means that we can scale down our encoding space and make the protocol more efficient. For this reason, in Table 1 we show some choices of parameters both with and without the zero-knowledge requirement.

Knowledge Soundness
Before diving into the technical details of the proof of soundness, we provide some intuition with an informal sketch of the security reductions. The CRS for the scheme contains the encodings $\mathsf{E}(s), \ldots, \mathsf{E}(s^d)$, as well as encodings of these terms multiplied by some field elements $\alpha, \beta \in \mathbb{F}$. The scheme requires the prover P to exhibit encodings computed homomorphically from this CRS.
The reason why we require the prover to duplicate its effort w.r.t. $\alpha$ is so that the simulator in the security proof can extract representations of $\hat{V}, \hat{H}$ as degree-$d$ polynomials $v(x), h(x)$ such that $v(s) = v_s$ and $h(s) = h_s$, by the q-PKE assumption (for $q = d$). The assumption also guarantees that this extraction is efficient. This explains the first quadratic root detection check, Equation (eq-pke), in the verification algorithm.
Suppose an adversary manages to forge a SNARK of a false statement and pass the verification test. Then the soundness of the square span program (Theorem 2) implies that, for the extracted polynomials $v(x), h(x)$ and the newly defined polynomial $v_{\mathrm{mid}}(x) := v(x) - v_0(x) - \sum_{i \leq \ell_u} a_i v_i(x)$, one of the following must hold:
i. $h(x)t(x) \neq v^2(x) - 1$, but $h(s)t(s) = v^2(s) - 1$, from Equation (eq-div);
ii. $v_{\mathrm{mid}}(x) \notin \mathsf{Span}(v_{\ell_u+1}, \ldots, v_m)$, but $B_w$ is a valid encoding of $\beta v_{\mathrm{mid}}(s)$, from Equation (eq-lin).
If the first case holds, then $p(x) := (v^2(x) - 1) - h(x)t(x)$ is a nonzero polynomial of some degree $k \leq 2d$ that has $s$ as a root, since the verification test implies $(v^2(s) - 1) - h(s)t(s) = 0$. The simulator can use $p(x)$ to solve q-PDH for $q \geq 2d - 1$: it uses the fact that $\mathsf{E}(0) = \mathsf{E}(s^{q+1-k}p(s))$ and subtracts off encodings of lower powers of $s$ to get $\mathsf{E}(s^{q+1})$. To handle the second case, i.e., to ensure that $v_{\mathrm{mid}}(x)$ is in the linear span of the $v_i(x)$'s with $\ell_u < i \leq m$, we use an extra scalar $\beta$, supplement the CRS with the terms $\{\mathsf{E}(\beta v_i(s))\}_{i > \ell_u}, \mathsf{E}(\beta t(s))$, and require the prover to present (an encoding of) $\beta v_{\mathrm{mid}}(s)$ in its proof. An adversary against q-PDH will choose a polynomial $\beta(x)$ convenient for solving the given instance. More specifically, it picks $\beta(x)$ with respect to the set of polynomials $\{v_i(x)\}_{i > \ell_u}$ such that the coefficient of $x^{q+1}$ in $\beta(x)v_{\mathrm{mid}}(x)$ is non-zero. Then, for the values in the crs, it uses $\beta := \beta(s)$. All this allows it to run the SNARK adversary and to obtain from its output $B_w$ an encoding of $s^{q+1}$, thus solving q-PDH.
Proof (of computational knowledge soundness). Let $\mathcal{A}_\Pi$ be the PPT adversary in the game for knowledge soundness (Figure 1) able to produce a proof $\pi$ on which Π.V returns true. We first claim that it is possible to extract the coefficients of the polynomial $v(x)$ corresponding to the value $v_s$ encoded in $V$. The setup algorithm first generates the parameters $(pk, sk)$ of an encoding scheme Enc and picks $\alpha, s \in \mathbb{F}$, which are used to compute $\mathsf{E}(s), \ldots, \mathsf{E}(s^d), \mathsf{E}(\alpha), \mathsf{E}(\alpha s), \ldots, \mathsf{E}(\alpha s^d)$. Fix some circuit $C$, and let ssp be an SSP for $C$. Let $\mathcal{A}_{\mathrm{PKE}}$ be the d-PKE adversary that takes as input the set of encodings $\sigma := (pk, \mathsf{E}(s), \ldots, \mathsf{E}(s^d), \mathsf{E}(\alpha), \mathsf{E}(\alpha s), \ldots, \mathsf{E}(\alpha s^d))$.
The auxiliary input generator $\mathcal{Z}$ is a PPT machine that, upon receiving $\sigma$ as input, samples $\beta \leftarrow_\$ \mathbb{Z}_p$, constructs the remaining terms of the CRS (as per Equation (4)), and outputs them in $z$. Thus, $\mathcal{A}_{\mathrm{PKE}}$ sets $\mathsf{crs} := (\mathsf{ssp} \| \sigma \| z)$ and invokes $\mathcal{A}_\Pi(\mathsf{crs})$. As a result, it obtains a proof $\pi = (H, \hat{H}, \hat{V}, V_w, B_w)$. On this proof, it computes $V := \mathsf{Eval}\big((V_w), (1), v_0(s) + \sum_{i \leq \ell_u} a_i v_i(s)\big)$, where $(V_w)$ — respectively $(1)$ — is the vector containing only $V_w$ — respectively $1$ — and $w_s$ is the element encoded in $V_w$. Finally, $\mathcal{A}_{\mathrm{PKE}}$ returns $(\hat{V}, V)$. If the adversary $\mathcal{A}_\Pi$ output a valid proof, then by verification equation (eq-pke) the two encodings $(V, \hat{V})$ encode values $v_s, \hat{v}_s$ such that $\hat{v}_s - \alpha v_s = 0$. Therefore, by the q-PKE assumption there exists an extractor $\mathsf{Ext}_{\mathrm{PKE}}$ that, using the same input (and random coins) as $\mathcal{A}_{\mathrm{PKE}}$, outputs a vector $(c_0, \ldots, c_d) \in \mathbb{F}^{d+1}$ such that $V$ is an encoding of $\sum_{i=0}^d c_i s^i$ and $\hat{V}$ is an encoding of $\sum_{i=0}^d \alpha c_i s^i$. In the same way, it is possible to recover the coefficients of the polynomial $h(x)$ used to construct $(H, \hat{H})$, the first two elements of the proof of $\mathcal{A}_\Pi$ (again, by Equation (eq-pke)).
Our witness extractor $\mathsf{Ext}_\Pi$, given crs, emulates the extractor $\mathsf{Ext}_{\mathrm{PKE}}$ above on the same input $\sigma$, using as auxiliary information $z$ the rest of the CRS given as input to $\mathsf{Ext}_\Pi$. By the reasoning above, $\mathsf{Ext}_\Pi$ can recover the coefficients $(c_0, \ldots, c_d)$ extracted from the encodings $(V, \hat{V})$. Consider now the polynomial $v(x) := \sum_{i=0}^d c_i x^i$. If it is possible to write it as $v(x) = v_0(x) + \sum_{i=1}^m a_i v_i(x) + \delta t(x)$ such that $(a_1, \ldots, a_m) \in \{0,1\}^m$ is a satisfying assignment for the circuit $C$ with $u = (a_1, \ldots, a_{\ell_u})$, then the extractor returns the witness $w = (a_{\ell_u+1}, \ldots, a_m)$.
With overwhelming probability, the extracted polynomial $v(x) := \sum_{i=0}^d c_i x^i$ does indeed provide a valid witness $w$. Otherwise, there exists a reduction to q-PDH that uses the SNARK adversary: the reduction $\mathcal{B}_{\mathrm{PDH}}$ generates valid encodings $(\mathsf{E}(\beta v_i(s)))_i$ and $\mathsf{E}(\beta t(s))$ using Eval. Note that, by construction of $\beta$, this evaluation is over $d + 1$ elements of $\mathbb{F}$ and that the $(q+1)$-th power of $s$ is never used. Now, since $v_{\mathrm{mid}}(x)$ is not in the proper span, the coefficient of degree $q+1$ of $\beta(x)v_{\mathrm{mid}}(x)$ is nonzero with overwhelming probability $1 - 1/|\mathbb{F}|$. The term $B_w$ of the proof must encode a polynomial in $s$ known to the reduction: $\sum_{i=0}^{2q} b_i s^i := \beta v_{\mathrm{mid}}(s)$, where the coefficient $b_{q+1}$ is non-trivial. $\mathcal{B}_{\mathrm{PDH}}$ can subtract off encodings of multiples of the other powers of $s$ to recover $\mathsf{E}(s^{q+1})$ and break q-PDH. This requires an evaluation on fresh encodings, namely computing $\mathsf{E}(-\sum_{i \neq q+1} b_i s^i)$ from the encodings of the powers of $s$ (Equation (8)). Adding the above to $B_w$ and multiplying by the inverse of the $(q+1)$-th coefficient (using once again Eval) provides a solution to the q-PDH problem for $q = d$.
Since the two above cases are ruled out under the q-PDH assumption, $\mathsf{Ext}_\Pi$ extracts a valid witness whenever the proof of $\mathcal{A}_\Pi$ is valid. □

As previously mentioned in Remark 7, the proof of knowledge soundness allows oracle access to the verification procedure. In the context of the weaker notion of soundness, the proof is almost identical, except that the adversary $\mathcal{B}_{\mathrm{PDH}}$ no longer needs to simulate the verification oracle by relying on the q-PKEQ assumption.

Efficiency and concrete parameters
The prover's computations are bounded by the security parameter and the size of the circuit, i.e., $\mathsf{P} \in O(\lambda d)$. As in [GGPR13, DFGK14], the verifier's computations depend solely on the security parameter, i.e., $\mathsf{V} \in O(\lambda)$. The proof consists of a constant number (precisely, 5) of LWE encodings, i.e., $|\pi| = 5 \cdot \tilde{O}(\lambda)$.
Using the propositions from Section 3 and knowing the exact number of homomorphic operations needed to produce a proof, we can now provide concrete parameters for our encoding scheme.
For a first attempt at implementing our solution, we assume a weaker notion of soundness, i.e., that in the KSND game the adversary does not have access to a verification oracle (cf. Figure 1). Concretely, this means that the only bound on the size of $p$ is given by the probability of guessing the witness or a field element. We thus fix $p = 2^{32}$ for the size of the message space.
The CRS is composed of encodings of different nature: some of them are fresh ($\mathsf{E}(s), \ldots, \mathsf{E}(s^d)$); some are stale in the construction of $\mathcal{A}_{\mathrm{PKE}}$ and in the construction of $\mathcal{B}_{\mathrm{PDH}}$ of Section 5.2, Item i. ($\mathsf{E}(\alpha s), \ldots, \mathsf{E}(\alpha s^d)$); and some are stale from the construction of $\mathcal{B}_{\mathrm{PDH}}$ of Section 5.2, Item ii. ($\mathsf{E}(\beta t(s)), (\mathsf{E}(\beta v_i(s)))_i$). They are displayed in Figure 8. Since, as we have seen, $\mathcal{B}_{\mathrm{PDH}}$ manipulates the q-PDH challenge via homomorphic operations, we must guarantee that the protocol adversary can perform at least the same number of homomorphic operations as in the real-world protocol. Therefore, in the real protocol we intentionally increase the magnitude of the noise in the CRS: the terms $\mathsf{E}(\alpha s^i)$ (with $i = 0, \ldots, d$) are generated by multiplying the respective fresh encoding $\mathsf{E}(s^i)$ by a term bounded by $p$; the terms $\mathsf{E}(\beta t(s)), \{\mathsf{E}(\beta v_i(s))\}_i$ are instead generated via Eval over $d + 1$ elements with coefficients bounded by $p$. Concretely, when encoding these elements using the encoding scheme of Section 3, the error for $\mathsf{E}(\alpha s^i)$ is sampled from $p \cdot \chi_\sigma$; the error for $\mathsf{E}(\beta t(s)), \mathsf{E}(\beta v_i(s))$ is sampled from $(p\sqrt{d+1}) \cdot \chi_\sigma$. The proof $\pi$ consists of five elements $(H, \hat{H}, \hat{V}, V_w, B_w)$, as per Equation (6). $H$ and $V_w$ are computed using an affine function on $d$ encodings with coefficients modulo $p$; $\hat{H}, \hat{V}$ are computed using a linear function on $d + 1$ encodings with coefficients modulo $p$; finally, $B_w$ is computed using a linear combination of $m - \ell_u$ encodings with coefficients in $\{0,1\}$, except the last one, $\beta t(s)$, whose coefficient is taken modulo $p$.
Fig. 8.
Summary of evaluations in the security proof. The leftmost part of the figure refers to the construction of the adversaries for q-PKE and q-PDH; the central part refers to the protocol itself (i.e., the construction of the proof π); the rightmost part refers to the construction of the adversary for q-PDH (Section 5.2, Item ii.). The syntax Eval[d, p] denotes a homomorphic evaluation on d encodings with coefficients in $\mathbb{Z}_p$.
$\mathsf{E}(s)$ denotes the PDH challenge.
Overall, the term that carries the heaviest load of homomorphic computations is $B_w$. The generation of $B_w$ is outlined in Figure 8, and to it (as well as to the other proof terms) we add a smudging term in order to obtain a zero-knowledge proof $\pi$.
In the construction of the adversary $\mathcal{B}_{\mathrm{PDH}}$ (Item ii.), we need to perform some further homomorphic operations on the proof element $B_w$ in order to solve the q-PDH challenge, namely one addition (Equation (8)) and one multiplication by a known scalar $b$ bounded by $p$. The result of the first operation is denoted by $\mathsf{E}(b \cdot s^{q+1})$ in Figure 8; the final result is the solution to the q-PDH challenge.
We now outline the calculations used to choose the relevant parameters for our encoding scheme. We focus on the term $B_w$ since, as already stated, it is the one involved in the largest number of homomorphic operations; the correctness of the other terms follows directly from Corollary 16. The chain of operations that needs to be supported is depicted in Figure 8: we now analyze them one by one. First of all, the terms $(\beta v_i(s))_{i \in [m - \ell_u]}$ and $\beta t(s)$ are produced through the algorithm Eval executed on $d + 1$ fresh encodings with coefficients modulo $p$. Let $\sigma$ be the discrete Gaussian parameter of the noise terms in fresh encodings; then, by Pythagorean additivity, the Gaussian parameter of the encodings output by this homomorphic evaluation is $\sigma_{\mathsf{Eval}} := p\sigma\sqrt{d+1}$. The term $\beta t(s)$ is then multiplied by a coefficient in $\mathbb{Z}_p$, and the result is added to a subset sum of the terms $(\beta v_i(s))_i$, i.e., a weighted sum with coefficients in $\{0,1\}$. It is easy to see that, for the first term, the resulting Gaussian parameter is bounded by $p\sigma_{\mathsf{Eval}}$, whereas for the second it is bounded by $\sigma_{\mathsf{Eval}}\sqrt{m - \ell_u}$. The parameter of the sum of these two terms is then bounded by $\sigma_{B_w} := \sigma_{\mathsf{Eval}}\sqrt{p^2 + m - \ell_u}$. Let us then consider a constant factor $T$ for "cutting the Gaussian tails", i.e., such that the probability of sampling a value of magnitude larger than $T$ times the standard deviation is as small as desired. We can then say that the absolute value of the error in $B_w$ is bounded by $T\sigma_{B_w}$. At this point we add a smudging term, which amounts to multiplying the norm of the noise by $(2^\kappa + 1)$ (cf. Corollary 18). Finally, the so-obtained encoding has to be summed with the output of an Eval invoked on $2d$ fresh encodings with coefficients modulo $p$, and multiplied by a constant in $\mathbb{Z}_p$. It is easy to calculate that the final noise is then bounded by $Tp\sigma_{B_w}(2^\kappa + 1) + Tp\sigma_{\mathsf{Eval}}$ (cf. Lemma 19). By substituting the values of $\sigma_{\mathsf{Eval}}, \sigma_{B_w}$,
remembering that σ :" αq and imposing the condition for having a valid encoding, we obtain T p 2 αq ?d `1 ´ap 2 `m ´ u p2 κ `1q `1¯ă q 2p .
The above corresponds to Equation (3) with bounds $B_e := T\sigma_{B_w}$ and $B_{\mathsf{Eval}} := T\sigma_{\mathsf{Eval}}$. By simplifying $q$ and isolating $\alpha$, we get:
$$\alpha < \left(2p^3 T\sqrt{d+1}\left(\sqrt{p^2 + m - \ell_u}\,(2^\kappa + 1) + 1\right)\right)^{-1}.$$
With our choice of parameters and by taking $T = 8$, we can select for instance $\alpha = 2^{-180}$. Once $\alpha$ and $p$ are chosen, we select the remaining parameters $q$ and $n$ in order to achieve the desired level of security for the LWE encoding scheme. To do so, we take advantage of Albrecht's estimator [APS15] which, as of now, covers the following attacks: meet-in-the-middle exhaustive search, coded-BKW [GJS15], dual-lattice attack and small/sparse secret variant [Alb17], lattice reduction with enumeration [LP11], primal attack via uSVP [AFG14, BG14], and the Arora-Ge algorithm [AG11] using Gröbner bases [ACFP14]. Some possible choices of parameters are reported in Table 1.
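The constraint on $\alpha$ can be checked numerically; the sketch below mirrors the symbols of the derivation ($p, T, d, m, \ell_u, \kappa$). The values of $d$, $\ell_u$ and $\kappa$ used here are illustrative assumptions of ours, not the paper's exact choices; only $p = 2^{32}$, $T = 8$ and $\alpha = 2^{-180}$ come from the text.

```python
import math

def max_alpha(p, T, d, m, lu, kappa):
    """Upper bound on alpha from
    T p^2 alpha q sqrt(d+1) (sqrt(p^2 + m - lu)(2^kappa + 1) + 1) < q / (2p),
    i.e. alpha < 1 / (2 p^3 T sqrt(d+1) (sqrt(p^2 + m - lu)(2^kappa + 1) + 1))."""
    inner = math.sqrt(p * p + m - lu) * (2 ** kappa + 1) + 1
    return 1.0 / (2 * p ** 3 * T * math.sqrt(d + 1) * inner)

# Illustrative instantiation: p = 2^32, T = 8, d = m = 2^15, l_u = 64, kappa = 32.
bound = max_alpha(2 ** 32, 8, 2 ** 15, 2 ** 15, 64, 32)
ok = 2.0 ** -180 < bound  # alpha = 2^-180 satisfies the constraint for these values
```

Note that the bound shrinks as the statistical parameter $\kappa$ grows, which is the trade-off driving the modulus sizes in Table 1.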
Finally, based on these parameters, we can concretely compute the size of the CRS and that of the proof $\pi$. The CRS is composed of $d + (d+1) + (m+1)$ encodings, corresponding to the encodings of the $d$ powers of $s$, the encodings of $\alpha$ multiplied by the $d+1$ powers of $s$, the $m$ encodings of $(\beta v_i)_i$, and the encoding of $\beta t(s)$. This amounts to $(2d + m + 2)$ LWE encodings, each of which has size $(n+1)\log q$ bits. For the calculations, we bound $m$ by $d$ and take the size of the CRS to be that of $(3d + 2)$ LWE encodings. From an implementation point of view, as already stated in Section 3, we can consider LWE encodings $(\mathbf{a}, b) \in \mathbb{Z}_q^{n+1}$ where the vector $\mathbf{a}$ is the output of a seeded PRG. The communication complexity is thereby greatly reduced, since sending an LWE encoding just amounts to sending the seed for the PRG and the value $b \in \mathbb{Z}_q$. For security to hold, we can take the size of the seed to be $\lambda$ bits, thus obtaining the final size of the CRS: $(3d + 2)\log q + \lambda$ bits. The proof $\pi$ is composed of 5 LWE encodings, and therefore has size $|\pi| = 5(n+1)\log q$ bits. Note that in this case we cannot use the same PRG trick, since the encodings are produced through homomorphic evaluations.
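The size bookkeeping above can be reproduced with a short helper. Here $\log q$, $n$ and $\lambda$ are free inputs (the concrete values from Table 1 should be plugged in); the function names are ours.

```python
def crs_size_bits(d, log_q, seed_bits):
    """CRS compressed with a seeded PRG: (3d + 2) encoding bodies of
    log q bits each, plus a single lambda-bit PRG seed."""
    return (3 * d + 2) * log_q + seed_bits

def proof_size_bits(n, log_q):
    """Proof pi = 5 full LWE encodings of (n + 1) * log q bits each
    (no PRG trick here, since the encodings result from Eval)."""
    return 5 * (n + 1) * log_q
```

For example, with $d = 2^{15}$, $\log q = 64$ and $\lambda = 128$ (illustrative values), the PRG-compressed CRS is $(3 \cdot 2^{15} + 2) \cdot 64 + 128$ bits, i.e., under 1 MB.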
Open problems. We list here some directions for future research. First of all, the proposed approach requires a very large message space for the SNARK protocol to go through. It might be possible to soften this requirement by employing decomposition techniques, e.g., by encoding terms bit by bit. We leave exploring this kind of optimization as an interesting open problem. Another natural question is whether it is possible to build a post-quantum designated-verifier SNARK from QAPs in the same spirit as Pinocchio [PHGR13]. A final, broader, open question is whether it is possible to obtain publicly verifiable SNARKs from lattice assumptions. It seems difficult to achieve this without some bilinear pairing map; however, the discovery of such a map would constitute a major breakthrough in cryptography as, among other things, it would allow for indistinguishability obfuscation [BGI+01] and multilinear maps [BS02].

Table 1 .
Security estimates for different choices of LWE parameters (circuit size fixed to $d = 2^{15}$), together with the corresponding sizes of the proof $\pi$ and of the CRS (when using a seeded PRG for its generation).