Article Open Access

The polynomial learning with errors problem and the smearing condition

  • Liljana Babinkostova, Ariana Chin, Aaron Kirtland, Vladyslav Nazarchuk and Esther Plotnick
Published/Copyright: August 10, 2022

Abstract

As quantum computing advances rapidly, guaranteeing the security of cryptographic protocols resistant to quantum attacks is paramount. Some leading candidate cryptosystems use the learning with errors (LWE) problem, attractive for its simplicity and hardness guaranteed by reductions from hard computational lattice problems. Its algebraic variants, ring learning with errors (RLWE) and polynomial learning with errors (PLWE), gain efficiency over standard LWE, but their security remains to be thoroughly investigated. In this work, we consider the "smearing" condition, a condition for attacks on PLWE and RLWE introduced by Elias et al. We expand upon some questions about smearing posed by Elias et al. and show how smearing is related to the coupon collector's problem. Furthermore, we develop an algorithm for computing probabilities related to smearing. Finally, we present a smearing-based algorithm for solving the PLWE problem.

MSC 2010: 06B05; 11T71; 81P94; 11Y16; 11Z05; 62A01

1 Introduction

Quantum computing promises to be a game-changing technology, as many problems that are considered intractable for conventional computers could be solved efficiently by harnessing properties of quantum physics to represent information. While quantum computing provides new methods to approach complex computing problems, it can also be used as a powerful tool to break existing cryptographic security.

There are currently two groundbreaking quantum algorithms that break today’s conventional cryptosystems. In 1994, Shor [19] proposed an efficient polynomial-time quantum algorithm for solving the integer factorization and discrete logarithm problems. Indeed, this algorithm breaks much of public key cryptography, as many widely used public key cryptosystems rely on the hardness of integer factorization and elliptic curve variants of the discrete logarithm problem, both of which have no known polynomial-time solution with conventional computing. In 1996, Grover [11] proposed a quantum algorithm that provides a quadratic speed up over classical algorithms for searching a key space, weakening the security of symmetric key cryptosystems that rely on the hardness of guessing a random shared key.

To address this issue, in 2016, the National Institute of Standards and Technology [17] solicited proposals for new cryptography standards. These new standards will serve as quantum-resistant counterparts to existing standards, including digital signature schemes.

A promising avenue in post-quantum cryptography is lattice-based cryptography, cryptography based on well-studied computational problems on lattices that have no known efficient solution with either classical or quantum computing [18]. Some lattice-based cryptography relies on the learning with errors (LWE) problem, introduced by Regev [14] in 2005, which exploits the hardness of solving a "noisy" linear system modulo a known integer. Regev also proved a reduction from worst-case computational lattice problems to LWE, affirming its hardness and making LWE a strong candidate as a cryptographic primitive.

The basic search LWE problem takes the form of a linear system hiding a secret integer vector $s = (s_1, s_2, \ldots)$, an integer error vector $e = (e_1, e_2, \ldots)$, and integer coefficient vectors $a_i$ modulo some integer $q$ as follows:

$$a_1 \cdot s + e_1 \equiv c_1 \pmod{q}$$
$$a_2 \cdot s + e_2 \equiv c_2 \pmod{q}$$
$$a_3 \cdot s + e_3 \equiv c_3 \pmod{q}$$
$$\vdots$$

While Gaussian elimination makes this system easy to solve with known a i , e i , and c i , the introduction of the unknown noise e i makes an easy linear system extremely difficult to solve. Even with small noise, the traditional process of Gaussian elimination magnifies noise to the point of rendering the modular linear system unsolvable.

The decision LWE problem is to distinguish with nonnegligible advantage between a uniform distribution and a distribution over the noisy inner products $(a, \langle a, s \rangle + e)$ (where $a$ is sampled uniformly at random). Since its introduction, the conjectured hardness of LWE [14] has already been used as a building block for many cryptographic applications: in efficient signature schemes [20], fully homomorphic encryption schemes [3], pseudo-random functions [2], and protocols for secure multi-party computation [5], and it also validates the hardness of the NTRU cryptosystem [12].

The “algebraically structured” variants, called ring LWE (RLWE) [16], polynomial LWE (PLWE) [13], and module LWE [1] (drawing values from any ring of integers, polynomial rings, and modules in place of the set of integers, respectively), offer more succinct representations of information.

While the hardness of RLWE relies on the hardness of computational lattice problems over a restricted set of lattices (called ideal lattices) [15], its construction has an inherent algebraic structure, which could make it vulnerable to algebraic attacks. In this article, we consider attacks against the PLWE problem.

We analyze a condition for attacks against the PLWE problem called the smearing condition, which was introduced by Elias et al. in ref. [8]. We demonstrate the parallels between the smearing condition and the coupon collector's problem and develop recursive methods for computing the probability of smearing. We also present an attack on the PLWE decision problem with small parameters using the smearing condition.

This article is organized as follows: Section 2 summarizes relevant background related to the RLWE problems, Section 3 focuses on the smearing condition and gives an overview of related work, Section 4 provides methods for calculating smearing probabilities for both uniform and nonuniform distributions, and Section 5 provides a smearing-based attack on the PLWE problem.

2 Preliminaries

2.1 Lattices and Gaussians

A lattice is a discrete additive subgroup of a vector space $V$. If $V$ has dimension $n$, a lattice $\mathcal{L}$ can be viewed as the set of all integer linear combinations of a set of linearly independent vectors $B = \{b_1, \ldots, b_k\}$ for some $k \le n$, which is written as follows:

$$\mathcal{L} = \mathcal{L}(B) = \left\{ \sum_{i=1}^{k} z_i b_i : z_i \in \mathbb{Z} \right\}.$$

If k = n , we call the lattice full-rank, and throughout this article, we only consider lattices of full-rank. We can extend this notion of lattices to matrix spaces by stacking the columns of a matrix.

We recall the following basic definitions of lattices and Gaussians. For a basic introduction to lattices, we refer the reader to ref. [6].

Definition 2.1

[6] Given a lattice $\mathcal{L}$ in a space $V$ endowed with a norm $\|\cdot\|$, the minimum distance of $\mathcal{L}$ is defined as $\lambda_1(\mathcal{L}) = \min_{0 \neq v \in \mathcal{L}} \|v\|$ (the length of the shortest nonzero vector in $\mathcal{L}$). Similarly, $\lambda_n$ is the minimum length of a set of $n$ linearly independent vectors in $\mathcal{L}$, where the length of a set of vectors $\{x_1, \ldots, x_k\}$ is defined as $\max_i \|x_i\|$.


Definition 2.2

[6] Given a lattice $\mathcal{L} \subseteq V$, where $V$ is endowed with an inner product $\langle \cdot, \cdot \rangle$, the dual lattice is defined as $\mathcal{L}^{\ast} = \{ v \in V : \langle \lambda, v \rangle \in \mathbb{Z} \text{ for all } \lambda \in \mathcal{L} \}$.

For a vector space $V$ with norm $\|\cdot\|$ and $\sigma > 0$, we define the Gaussian function $\rho_\sigma : V \to (0, 1]$ by

$$\rho_\sigma(x) = e^{-\pi \|x\|^2 / (2\sigma^2)}.$$

The Gaussian distribution (normal distribution) with parameter $\sigma > 0$ has the continuous probability density function

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\pi x^2 / (2\sigma^2)}.$$

When sampling a Gaussian over a lattice $\mathcal{L}$, we will use the discrete form of the Gaussian distribution. A discrete Gaussian distribution $DG_\sigma$ with parameter $\sigma > 0$ over a lattice $\mathcal{L} \subseteq \mathbb{R}^n$ is a distribution in which each lattice point $\lambda \in \mathcal{L}$ is sampled with probability $P_\lambda$ proportional to $\rho_\sigma(\lambda)$:

$$P_\lambda = \frac{\rho_\sigma(\lambda)}{\rho_\sigma(\mathcal{L})}, \qquad \rho_\sigma(\mathcal{L}) = \sum_{\lambda \in \mathcal{L}} \rho_\sigma(\lambda).$$

It is well known that the sum of n independent, normally distributed random variables is normal.

A spherical Gaussian distribution is a multivariate Gaussian distribution in which the coordinates are independent (the covariance matrix is diagonal).

This implies that we can sample each coordinate independently from a one-dimensional Gaussian distribution. We use the Gaussian distribution as the error distribution in the LWE problem, as discussed later.
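As an illustration of the coordinate-wise sampling just described, the following Python sketch draws a spherical discrete Gaussian over $\mathbb{Z}_q^n$. It is our own simplification, not part of the original construction: the support is restricted to the centered representatives of the residues of $\mathbb{Z}_q$ (tails beyond one period are ignored), and the function names are hypothetical.

```python
import math
import random

def discrete_gaussian_probs(q, sigma):
    """Probabilities over Z_q proportional to the Gaussian function
    rho_sigma evaluated at the centered representative of each residue.
    (A simplification: mass beyond one period of Z_q is ignored.)"""
    reps = [((i + q // 2) % q) - q // 2 for i in range(q)]  # centered lift
    weights = [math.exp(-math.pi * r * r / (2 * sigma ** 2)) for r in reps]
    total = sum(weights)
    return [w / total for w in weights]  # index i = probability of residue i

def sample_spherical(q, sigma, n, rng=random):
    """A spherical discrete Gaussian over Z_q^n: since the coordinates are
    independent, each one is drawn from the same one-dimensional Gaussian."""
    probs = discrete_gaussian_probs(q, sigma)
    return [rng.choices(range(q), weights=probs)[0] for _ in range(n)]

print(sample_spherical(53, 3.0, 4, random.Random(0)))
```

Because the distribution is spherical, the whole error vector is obtained by $n$ independent one-dimensional draws, which is exactly how the error polynomials of the PLWE distribution below are sampled in the power basis.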

2.2 Learning with errors distributions

An instance of the RLWE distribution is given by a choice of number field $K$, secret $s$, prime $q$, and parameter $\sigma$ (for the error distribution).

Definition 2.3

(RLWE distribution, [8]) Let $R = \mathcal{O}_K$ be the ring of integers of a number field $K$. Define

$$R_q = R / qR.$$

Let $U_{R_q}$ be the uniform distribution over $R_q$, and let the error distribution $G_{\sigma, R_q}$ be a discrete Gaussian distribution over $R_q$. For some $s \in R_q$, $a \leftarrow U_{R_q}$, and $e \leftarrow G_{\sigma, R_q}$, pairs of the form $(a, c = a \cdot s + e)$ compose the RLWE distribution $D_{R, s, G_\sigma}$ over $R_q \times R_q$.

The PLWE distribution is defined similarly; rather than the ring of integers of a number field, the distribution is defined over a polynomial ring. An instance of the PLWE distribution is now given by a choice of monic, irreducible polynomial f ( x ) Z [ x ] , secret s , prime q , and parameter σ (for the error distribution).

Definition 2.4

(PLWE distribution, [8]) Let $f(x) \in \mathbb{Z}[x]$ be monic and irreducible of degree $n$. Assume that $f(x)$ splits over $\mathbb{Z}_q \coloneqq \mathbb{Z}/q\mathbb{Z}$. Define

$$P \coloneqq \mathbb{Z}[x]/(f(x)), \qquad P_q \coloneqq P/qP.$$

Let $G_{\sigma, P}$ be a discrete Gaussian distribution over $P$, spherical in the power basis $(1, x, x^2, \ldots, x^{n-1})$ of $P$. Let $U_{P_q}$ be the uniform distribution over $P_q$, and let the error distribution $G_{\sigma, P_q}$ be a discrete Gaussian distribution over $P_q$.

For some $s \in P_q$, $a \leftarrow U_{P_q}$, and $e \leftarrow G_{\sigma, P_q}$, pairs of the form

$$(a, c = a \cdot s + e)$$

compose the PLWE distribution $D_{P, s, G_\sigma}$ over $P_q \times P_q$.

The decision problems for RLWE and PLWE are analogous to decision LWE: given the same number of (arbitrarily many) independent samples from two distributions, determine with nonnegligible advantage that the set of samples follows the RLWE (PLWE) distribution D R , s , G σ ( D P , s , G σ ) versus a uniform distribution over R q × R q ( P q × P q ).

3 Smearing condition

3.1 Motivation and related work

A common technique for breaking cryptographic schemes is to transfer the problem onto a smaller space, where looking for the secret key by brute force is feasible. In finding the secret $s \in P_q$ in a PLWE problem by brute force, the attacker would have to search through $q^n$ possibilities, which is infeasible given the sizes of $q$ and $n$. However, if the attacker can somehow transfer the PLWE problem onto a smaller field, such as $\mathbb{Z}_q$, then brute force becomes feasible, and, if not much information is lost in this transformation, a brute-force search on $\mathbb{Z}_q$ helps solve the original problem on $P_q$.

An example of this approach is the "$\gamma = 1$ attack" on decision-PLWE, as presented in ref. [8]. Suppose that $f$ has a root at $\gamma = 1$, i.e., $f(1) \equiv 0 \bmod q$. Expressing $e$ in the power basis, $e(x) = \sum_{j=0}^{n-1} e_j x^j$, where $e_j \leftarrow G_\sigma$. Then, $e(1) = \sum_{j=0}^{n-1} e_j \leftarrow G_{\sqrt{n}\sigma}$, since the sum of $n$ independent Gaussians with parameter $\sigma$ is Gaussian with parameter $\sqrt{n}\sigma$. So, if samples follow the PLWE distribution, $e(1)$ can take on only a small range of values.

Note that there are q possibilities for the value of s ( 1 ) . So,

  1. For each possible guess $g \in \mathbb{Z}_q$ for $s(1)$, and for each sample $(a_i, c_i)$, compute $e_i' = c_i(1) - g \cdot a_i(1)$.

    1. Check and record whether $e_i'$ is within the small range expected of $G_{\sqrt{n}\sigma}$.

      Note: If the guess for $s(1)$ is correct, $e_i'$ will equal $e_i(1)$. If the guess for $s(1)$ is incorrect, or if $(a_i, c_i)$ are uniform to begin with, $e_i'$ will be uniform over $\mathbb{Z}_q$.

  2. Make a decision about the sample distribution:

    1. If there is one $g$ for which all the values $c_i(1) - g \cdot a_i(1)$ are within the range expected of $G_{\sqrt{n}\sigma}$, then $(a_i, c_i)$ are taken from the PLWE distribution with $g = s(1)$.

    2. If all possible $g$ values give uniform distributions of $c_i(1) - g \cdot a_i(1)$, then $(a_i, c_i)$ are taken from the uniform distribution.

    3. If several $g$ appear to work, repeat the algorithm with more samples.

This attack works with probability P > 1 2 [8].
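To make the steps above concrete, here is a toy simulation of the $\gamma = 1$ guessing loop. It is our own illustration under hypothetical parameters: errors are drawn uniformly from $\{-1, 0, 1\}$ rather than from a true discrete Gaussian, and `bound` is an assumed acceptance threshold on $|e(1)| \bmod q$, not a value from ref. [8].

```python
import random

# Hypothetical toy parameters for illustration only
q, n, num_samples, bound = 97, 8, 20, 8

rng = random.Random(0)
s = [rng.randrange(q) for _ in range(n)]   # secret polynomial coefficients

def poly_eval_at_1(coeffs):
    """g(1) mod q is just the coefficient sum mod q."""
    return sum(coeffs) % q

samples = []
for _ in range(num_samples):
    a = [rng.randrange(q) for _ in range(n)]
    e = [rng.choice([-1, 0, 1]) for _ in range(n)]
    # c(1) = a(1) * s(1) + e(1) mod q: evaluation at the root gamma = 1
    c1 = (poly_eval_at_1(a) * poly_eval_at_1(s) + sum(e)) % q
    samples.append((poly_eval_at_1(a), c1))

def small(x):
    """Is x a plausible value of e(1) mod q (close to 0 or to q)?"""
    return min(x, q - x) <= bound

# Step 1: for every guess g for s(1), test all residuals c(1) - g*a(1)
candidates = [g for g in range(q)
              if all(small((c1 - g * a1) % q) for a1, c1 in samples)]
print(candidates, poly_eval_at_1(s))
```

With these toy numbers the correct guess always survives (since $|e(1)| \le n$), while a wrong guess passes all 20 uniform-looking residual checks only with negligible probability, so the surviving candidate reveals $s(1)$.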

Similar attacks exist for $\gamma$ of small order, where $\gamma^r = 1$ for some small $r \in \mathbb{Z}^+$, and are explained further in ref. [8]. Other attacks include exploitation of the size of the error values. However, the probability of success for that particular attack decays (except in a special case), making it impractical to implement [7].

Definition 3.1

[8] Let $f(x) \in \mathbb{Z}[x]$ be a monic, irreducible polynomial of degree $n$, and let $\gamma$ be a root of $f(x) \bmod q$. Then a smearing map $\pi_\gamma$ is defined as follows:

$$\pi_\gamma : P_q \to \mathbb{Z}_q, \qquad g(x) \mapsto g(\gamma).$$

Definition 3.2

[8] Given a smearing map $\pi_\gamma$ and a subset $S \subseteq P_q$, we say that $S$ smears under $\pi_\gamma$ if $\pi_\gamma(S) = \mathbb{Z}_q$.

Note that for a smearing map $\pi_\gamma$, we have $\ker \pi_\gamma = (x - \gamma)$.

Also, note that $P_q / (x - \gamma) \cong \mathbb{Z}_q$, since $\gamma$ is a root of $f$ over $\mathbb{Z}_q$. This implies that $(x - \gamma)$ has $q$ cosets in $P_q$, which are, consequently, $\{ i + (x - \gamma) \}_{i=0}^{q-1}$.

Lemma 3.3

Let π γ be a smearing map. Then, π γ ( f ) = π γ ( g ) if and only if f and g are in the same coset of ( x γ ) .

Proof

The claim follows from the fact that

$$f(\gamma) = g(\gamma) \iff (f - g)(\gamma) = 0 \iff f - g = h \cdot (x - \gamma) \text{ for some } h \in P_q \iff f, g \text{ are in the same coset of } (x - \gamma). \qquad \Box$$

This lemma implies that the set S smears if and only if S contains an element in each of the q cosets of ( x γ ) in P q . In the next two sections, we investigate the size of a subset sampled from a uniform distribution and investigate the properties of a subset sampled from a Gaussian distribution as in the PLWE problem.
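By the lemma, checking whether a finite set smears reduces to evaluating each polynomial at $\gamma$ and testing whether every residue of $\mathbb{Z}_q$ is hit. A minimal sketch (the function name and coefficient convention are our own):

```python
def smears(polys, gamma, q):
    """Check whether a set of polynomials (as coefficient lists, constant
    term first) smears under the map g(x) -> g(gamma) mod q, i.e. whether
    the images hit all q cosets of (x - gamma)."""
    images = set()
    for g in polys:
        val = 0
        for coeff in reversed(g):       # Horner's rule for g(gamma) mod q
            val = (val * gamma + coeff) % q
        images.add(val)
    return len(images) == q

# The constant polynomials 0, ..., q-1 trivially hit every coset:
print(smears([[c] for c in range(5)], gamma=2, q=5))  # True
```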

3.2 Smearing: the uniform distribution case

We investigate the size of a subset sampled from a uniform distribution, i.e., the polynomials in $S$ are chosen uniformly at random over $P_q$. As we will see, the assumption of uniformity eliminates much of the algebraic aspect of the smearing problem as related to PLWE and reduces the problem to a classic problem in probability theory, the coupon collector's problem.

The classical version of the coupon collector's problem is as follows: Suppose a company places one of $q$ distinct types of coupons, $c_1, \ldots, c_q$, into each of its cereal boxes independently, with equal probability $1/q$. Let $X$ be the random variable indicating the number of boxes one has to buy before collecting all of the coupons. The question then is:

How many boxes should one expect to buy before collecting at least one of each type of coupon? Equivalently, what is E [ X ] ?

The following is a well known lemma, which computes E [ X ] with a geometric distribution approach.

Lemma 3.4

[10] Let $X$ be the number of boxes needed to be purchased to collect all $q$ coupons. Then,

$$E[X] = qH_q = q\log q + \gamma q + \frac{1}{2} + O(1/q),$$

where $H_q$ is the $q$th harmonic number and $\gamma \approx 0.57722$ is the Euler–Mascheroni constant. Furthermore,

$$\mathrm{Var}(X) < \frac{\pi^2}{6} q^2.$$
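For concreteness, the exact expectation $qH_q$ and the asymptotic expansion of Lemma 3.4 are easy to compare numerically (a small sketch; the function names are our own):

```python
import math

def expected_boxes(q):
    """Exact expectation E[X] = q * H_q for the coupon collector's problem."""
    return q * sum(1.0 / k for k in range(1, q + 1))

def approx_expected_boxes(q):
    """Asymptotic expansion q*log q + gamma*q + 1/2 from Lemma 3.4."""
    euler_mascheroni = 0.5772156649015329
    return q * math.log(q) + euler_mascheroni * q + 0.5

# The two agree to within O(1/q) already for moderate q:
print(expected_boxes(53), approx_expected_boxes(53))
```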

We can reduce the problem of uniform smearing to the coupon collector's problem.

Lemma 3.5

A uniform distribution over P q maps under π γ to a uniform distribution over Z q .

Proof

By Lemma 3.3, and the fact that all cosets of ( x γ ) are of the same size, a polynomial chosen uniformly at random in P q will have the probability 1 / q of being in any given coset of ( x γ ) , and hence, there is a 1 / q probability that π γ produces any given element in Z q .□

So, instead of selecting polynomials in $P_q$, we can choose elements of $\mathbb{Z}_q$ uniformly. In this context, the smearing problem is identical to the coupon collector's problem. Each polynomial has an image uniformly distributed over $\mathbb{Z}_q$, and we want to "collect all the coupons," i.e., for each element $j \in \mathbb{Z}_q$, collect at least one polynomial having $j$ as its image under the smearing map.

3.2.1 The main question

Let $m = |S|$ and $\mathbb{Z}_q$ be given. Assume that the elements of $S$ are chosen uniformly at random (for each $s \in S$, $s \leftarrow U_{P_q}$). The question is to determine the probability that $S \subseteq P_q$ will smear.

Remark 3.6

Note that, although in the smearing problem q must be prime, in the broader context of the coupon collector’s problem, q can be any positive integer; we will not demand such a restriction within our probabilistic calculations.

Fix a polynomial $f(x) \in \mathbb{Z}[x]$, and thus fix some root $\gamma \in \mathbb{Z}_q$ and the smearing map $\pi_\gamma$. Denote by $P(m, q)$ the probability that a subset $S \subseteq P_q$ of size $m$ smears.

Remark 3.7

Given a probability distribution on $X$ (the random variable from the coupon collector's problem representing the number of cereal boxes one must purchase before collecting all $q$ coupons), $P(m, q)$ is simply the cumulative distribution function of $X$ evaluated at $m$, since $P(S \text{ smears}) = P(X \le |S| = m)$.

In Section 4, we provide computations and approximations of the smearing probabilities.

3.3 Smearing: the nonuniform case

In this section, we investigate the smearing condition when the error distribution over $P_q$ is not uniform. Note first that we can view drawing from $D_{P, s, G_\sigma}$,

$$(a, a \cdot s + e), \qquad a \leftarrow U_{P_q}, \; s \in P_q, \; e \leftarrow G_{\sigma, P_q},$$

as simply drawing

$$(a, e), \qquad a \leftarrow U_{P_q}, \; e \leftarrow G_{\sigma, P_q},$$

since multiplying an element $a$ selected uniformly at random by a fixed secret $s$ is the same as selecting $a \cdot s$ uniformly at random (provided that $s$ is invertible); adding the Gaussian error then yields the same distribution in both views. When we discuss the mapped error distribution, we consider selecting $e \leftarrow G_{\sigma, P_q}$ and its image $\pi_\gamma(e)$.

3.3.1 The distribution of e ( α )

An explicit method of calculating the probability distribution of e ( α ) given the distribution of the polynomial coefficients of e is presented in ref. [4].

Theorem 3.8

[4] Suppose $e_0, e_1, \ldots, e_{n-1}$ are independent random variables in $\mathbb{Z}_q$ with the same probability distribution $(c_0, c_1, \ldots, c_{q-1})$. Let $c(x) = \sum_{i \in \mathbb{Z}_q} c_i x^i$. Then, for any $a_1, \ldots, a_{n-1} \in \mathbb{Z}_q$, the probability distribution of $e_0 + e_1 a_1 + \cdots + e_{n-1} a_{n-1} \bmod q$ can be computed as the coefficients of the polynomial

$$c(x) \, c(x^{a_1}) \cdots c(x^{a_{n-1}}) \bmod (x^q - 1)$$

(illustrated in Figures 1 and 2).

Since $e(\alpha) = e_0 + e_1\alpha + e_2\alpha^2 + \cdots + e_{n-1}\alpha^{n-1}$, by setting $a_i = \alpha^i$ for $1 \le i \le n-1$ and applying the aforementioned theorem, we can compute the probability distribution of $e(\alpha)$ over $\mathbb{Z}_q$. In general, we refer to that distribution as $\chi$, the "mapped error distribution."

We can compute the discrete Gaussian distribution over $\mathbb{Z}_q = \mathbb{Z}/q\mathbb{Z}$ as in ref. [4]. The parameter used is $\theta = \sigma\sqrt{2\pi}/q$; it is called $\beta$ in that article.
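Theorem 3.8 translates directly into a computation with cyclic convolutions modulo $x^q - 1$. The following sketch is our own implementation under the choice $a_i = \alpha^i$ from the text; the function name is hypothetical.

```python
def mapped_error_distribution(c, alpha, n, q):
    """Distribution of e(alpha) = e_0 + e_1*alpha + ... + e_{n-1}*alpha^{n-1}
    mod q, where each e_j has distribution c = (c_0, ..., c_{q-1}), computed
    as in Theorem 3.8: coefficients of prod_j c(x^(alpha^j)) mod (x^q - 1)."""
    dist = [0.0] * q
    dist[0] = 1.0                        # empty product: point mass at 0
    for j in range(n):
        a = pow(alpha, j, q)
        # c(x^a) mod (x^q - 1): the mass c_i moves to exponent i*a mod q
        ca = [0.0] * q
        for i, ci in enumerate(c):
            ca[(i * a) % q] += ci
        # multiply mod (x^q - 1), i.e. cyclic convolution with ca
        new = [0.0] * q
        for u, pu in enumerate(dist):
            if pu:
                for v, pv in enumerate(ca):
                    new[(u + v) % q] += pu * pv
        dist = new
    return dist
```

For example, feeding in a point mass at $1$ (every $e_j = 1$) returns a point mass at $\sum_j \alpha^j \bmod q$, and a uniform input stays uniform, as expected.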

Figure 1

Initial discrete Gaussian distributions (left: over $\mathbb{Z}_{53}$ with $\theta = 0.3$; right: over $\mathbb{Z}_{101}$ with $\theta = 0.02$).

Figure 2

Mapped error distributions from the discrete Gaussian distributions in Figure 1 (left: with $\gamma = 2$, where $x^2 + 49$ is irreducible over $\mathbb{Z}$, has $\gamma$ as a root over $\mathbb{Z}_{53}$, and $\gamma$ has order 52 in $\mathbb{Z}_{53}$; right: with $\gamma = 20$, where $x^5 - 6x + 2$ is irreducible over $\mathbb{Z}$, has $\gamma$ as a root over $\mathbb{Z}_{101}$, and $\gamma$ has order 50 in $\mathbb{Z}_{101}$).

4 Computing smearing probabilities

Let $\chi$ be a discrete probability distribution on $[q]$ (where $[q]$ denotes the set $\{1, 2, \ldots, q\}$), and let $P_\chi(m, q)$ denote the probability that, when $m$ samples are independently drawn from $\chi$, they will "smear," i.e., each element of $[q]$ will be chosen at least once. When $\chi$ is the uniform distribution, we denote this probability by $P_U(m, q)$, or simply $P(m, q)$. In this section, we provide practical ways of calculating these probabilities.

4.1 An approximation of uniform smearing probabilities

A result by Erdős and Rényi [9] gives a way to approximate $P(m, q)$ for large values of $q$.

Theorem 4.1

(Erdős and Rényi [9]) Let $U$ be the uniform distribution over $[q]$, and let $X$ be the random variable denoting the number of independent samples one must take from $U$ until picking each element of $[q]$ at least once. Then,

$$\lim_{q \to \infty} \Pr(X < q\log q + cq) = \exp(-\exp(-c)).$$

In our case, $P(m, q) = \Pr(X \le m)$, so making the substitution $m = q\log q + cq$ (i.e., $c = m/q - \log q$) gives the formula

$$P(m, q) \approx \exp(-q \exp(-m/q)).$$

Although this is a powerful approximation, for some applications, it might be preferable to calculate this probability exactly for concrete values of m and q . The following sections contribute towards this goal, as well as give formulas for the case when χ is not uniform.
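A small numeric sketch of the approximation above (the function name is our own); it also illustrates the sharp threshold around $m \approx q\log q$:

```python
import math

def smear_prob_approx(m, q):
    """Erdos-Renyi approximation P(m, q) ~ exp(-q * exp(-m / q))."""
    return math.exp(-q * math.exp(-m / q))

# Around the coupon-collector threshold m ~ q log q (about 210 for q = 53),
# the probability transitions sharply from near 0 to near 1:
q = 53
for m in (100, 210, 400):
    print(m, smear_prob_approx(m, q))
```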

4.2 A recursive formula in m

Proposition 4.2

Let χ be a discrete probability distribution on [ q ] , with p k being the probability of picking the k th element. Let χ / k denote the probability distribution on q 1 elements after the k th element has been removed from χ , and the remaining probabilities have been normalized. Then,

$$P_\chi(m, q) = P_\chi(m-1, q) + \sum_{k=1}^{q} p_k (1 - p_k)^{m-1} P_{\chi/k}(m-1, q-1).$$

Proof

Assume that we choose $m$ independent samples one by one from the distribution $\chi$. Let $S$ be the event that the samples smear. Let $A$ be the event that the $m$th sample achieves smearing (i.e., the previous $m-1$ samples cover $q-1$ distinct elements, and the $m$th sample happens to cover the remaining $q$th element). Also, let $B$ be the event that smearing happens within the first $m-1$ samples (i.e., by the time $m-1$ samples have been taken, they already take on $q$ distinct values). Notice that $S = A \cup B$ and that $A$ and $B$ are disjoint. Therefore,

$$P_\chi(m, q) = \Pr(S) = \Pr(A) + \Pr(B).$$

To calculate $\Pr(A)$, we use the law of total probability to condition on the outcome of the $m$th sample, which we denote by $K \in [q]$:

$$\Pr(A) = \sum_{k=1}^{q} \Pr(A \mid K = k)\Pr(K = k) = \sum_{k=1}^{q} \Pr(A \mid K = k)\, p_k.$$

To calculate the value of $\Pr(A \mid K = k)$, we notice that the only way smearing is achieved by the $m$th sample being equal to $k$ is if, first, the previous $m-1$ samples all fall into $[q] \setminus \{k\}$, and, second, those $m-1$ samples smear on $[q] \setminus \{k\}$. Therefore,

$$\Pr(A \mid K = k) = (1 - p_k)^{m-1} P_{\chi/k}(m-1, q-1),$$

where $(1 - p_k)^{m-1}$ is the probability that the first $m-1$ samples are contained in $[q] \setminus \{k\}$, and $P_{\chi/k}(m-1, q-1)$ is the probability that, conditioned on this, these $m-1$ samples smear on $[q] \setminus \{k\}$. Hence,

$$\Pr(A) = \sum_{k=1}^{q} p_k (1 - p_k)^{m-1} P_{\chi/k}(m-1, q-1).$$

On the other hand, the probability of event $B$, i.e., that smearing is achieved within the first $m-1$ samples, is simply $P_\chi(m-1, q)$, so

$$P_\chi(m, q) = P_\chi(m-1, q) + \sum_{k=1}^{q} p_k (1 - p_k)^{m-1} P_{\chi/k}(m-1, q-1). \qquad \Box$$
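Proposition 4.2 can be implemented directly as a memoized recursion. The sketch below is our own code; since the recursion branches over all subsets of $[q]$, it is practical only for small $q$.

```python
from functools import lru_cache

def nonuniform_smear_prob(m, probs):
    """P_chi(m, q) via the recursion of Proposition 4.2.  `probs` gives the
    distribution (p_1, ..., p_q); exponential in q, so for small q only."""
    @lru_cache(maxsize=None)
    def P(m, ps):
        q = len(ps)
        if q == 1:
            return 1.0 if m >= 1 else 0.0
        if m < q:
            return 0.0
        # smearing within the first m-1 samples ...
        total = P(m - 1, ps)
        # ... or achieved exactly by the m-th sample being element k
        for k, pk in enumerate(ps):
            # chi/k: remove element k and renormalize the rest
            chi_k = tuple(p / (1 - pk) for i, p in enumerate(ps) if i != k)
            total += pk * (1 - pk) ** (m - 1) * P(m - 1, chi_k)
        return total
    return P(m, tuple(probs))

print(nonuniform_smear_prob(3, [1/3, 1/3, 1/3]))  # should match 3!/3^3
```

For a uniform input this reproduces the base case $P(q, q) = q!/q^q$ discussed below Lemma 4.3, which is a useful sanity check.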

In the case where χ is the uniform distribution on [ q ] , this relation becomes greatly simplified:

Lemma 4.3

For the uniform distribution on [ q ] ,

$$P(m, q) = P(m-1, q) + P(m-1, q-1)\left(\frac{q-1}{q}\right)^{m-1}.$$

Proof

We use the result of Proposition 4.2. In the uniform distribution, $p_k = 1/q$ for every $k$. Furthermore, for every $k$, $\chi/k$ is the uniform distribution on $[q-1]$, so $P_{\chi/k}(m-1, q-1) = P(m-1, q-1)$. Therefore, if $\chi$ is the uniform distribution, then

$$P_\chi(m, q) = P_\chi(m-1, q) + \sum_{k=1}^{q} \frac{1}{q}\left(\frac{q-1}{q}\right)^{m-1} P(m-1, q-1) = P(m-1, q) + P(m-1, q-1)\left(\frac{q-1}{q}\right)^{m-1}. \qquad \Box$$

Lemma 4.3, if implemented as a recursive formula, provides a very rapid method of computing $P(m, q)$. The base cases are rather straightforward. If $m < q$, then

$$P(m, q) = 0,$$

since it is impossible to pick $q$ different elements with fewer than $q$ samples. If $m = q$, then

$$P(m, q) = \frac{q!}{q^q}.$$

To see why this is the case, notice that if the number of samples is equal to q , then every single sample must be a “success,” i.e., pick an element of [ q ] that has not been picked before. For the first sample, one can pick any of [ q ] , so the probability of success is q / q . For the second sample, there are q 1 unpicked elements, so the probability of success is ( q 1 ) / q . For the third sample, there are now q 2 elements that are not selected, so the probability of success is now ( q 2 ) / q . This continues until the q th sample, for which there is only one option left, giving a success probability of 1 / q . Multiplying these probabilities together gives q ! / q q . Finally, if q = 1 , and m > 0 , then P ( m , q ) = 1 , as the first sample is always a success, and one success is sufficient in this case.

Computing $P(m, q)$ using the recursive formula of Lemma 4.3 along with these base cases results in the computation of $P(i, j)$ for each $1 \le j \le q$ and $j \le i \le j + (m - q)$. Hence, the complexity of the recursive computation is on the order of $q(m - q)$. Notice, however, that as a result, one computes not only $P(m, q)$ but also $P(i, q)$ for all $i \in [0, m]$, which is very useful information to have when choosing parameters for the smearing attack (which will be discussed later).
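The recursion and base cases above can be sketched as a small dynamic program (our own implementation; the table layout is an assumption):

```python
def uniform_smear_table(m_max, q_max):
    """P[j][i] = P(i, j) for 1 <= j <= q_max and 0 <= i <= m_max, filled
    with the recursion of Lemma 4.3.  Entries left at 0.0 cover the base
    case P(i, j) = 0 for i < j; the row P(i, 1) = 1 for i >= 1 seeds the
    recursion (and P(j, j) = j!/j^j then follows automatically)."""
    P = [[0.0] * (m_max + 1) for _ in range(q_max + 1)]
    for i in range(1, m_max + 1):
        P[1][i] = 1.0                   # one coupon: any sample suffices
    for j in range(2, q_max + 1):
        for i in range(j, m_max + 1):
            P[j][i] = P[j][i - 1] + P[j - 1][i - 1] * ((j - 1) / j) ** (i - 1)
    return P

table = uniform_smear_table(400, 53)
print(table[53][400])  # P(400, 53)
```

Filling the full table costs on the order of $q \cdot m$ arithmetic operations and yields every $P(i, j)$ at once, matching the discussion above.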

Figure 3 shows $P(m,q)$ values for a range of $m$ and $q$, calculated using this method.

Figure 3

$P(m,q)$, $1 \le m \le 400$ and $1 \le q \le 53$.

While Lemma 4.3 provides an effective method of calculating smearing probabilities for uniform distributions, using Proposition 4.2 to calculate smearing probabilities for nonuniform distributions is inefficient: to calculate $P_\chi(m,q)$, one must calculate smearing probabilities on all subsets of $[q]$, which makes the complexity on the order of $2^q(m-q)$. A more efficient method for nonuniform smearing can be achieved by recursion in $q$, rather than recursion in $m$, as described in the next section.

4.3 A recursive formula in q

Proposition 4.4

Let $\chi$ be a discrete probability distribution on $[q]$, with $p_q$ the probability of picking the $q$th element. Let $\chi/q$ denote the probability distribution on $q-1$ elements after the $q$th element has been removed from $\chi$ and the remaining probabilities have been renormalized. Then,

$$P_\chi(m,q) = \sum_{k=1}^{m-q+1} \binom{m}{k} p_q^k (1-p_q)^{m-k} P_{\chi/q}(m-k, q-1).$$

Proof

Let $K$ be a random variable denoting the number of times the $q$th element is picked. For smearing to occur, $K$ must be at least 1 (else the $q$th element will not be chosen), but cannot be greater than $m-q+1$: there are $m$ samples in total, and at least $q-1$ of them are needed to cover the first $q-1$ elements, leaving a maximum of $m-(q-1)$ available for the $q$th element. Then, by the law of total probability,

$$P_\chi(m,q) = \sum_{k=1}^{m-q+1} \Pr(K=k)\Pr(\text{smearing} \mid K=k).$$

Since the samples are drawn independently, notice that $K \sim \mathrm{Bin}(m, p_q)$. Therefore,

$$\Pr(K=k) = \binom{m}{k} p_q^k (1-p_q)^{m-k}.$$

On the other hand, the probability that smearing occurs given that the $q$th element is chosen $k$ times, where $k \in [1, m-q+1]$, is the probability that $m-k$ samples, taken from $\chi/q$, smear on $[q-1]$. Hence,

$$\Pr(\text{smearing} \mid K=k) = P_{\chi/q}(m-k, q-1).$$

Finally,

$$P_\chi(m,q) = \sum_{k=1}^{m-q+1} \binom{m}{k} p_q^k (1-p_q)^{m-k} P_{\chi/q}(m-k, q-1).\qquad\Box$$

Using Proposition 4.4 as a basis for a recursive method of calculating $P_\chi(m,q)$ is more efficient than using Proposition 4.2. The base cases are very similar to those in Section 4.2. If $m < q$, then $P_\chi(m,q) = 0$, and if $q = 1$ and $m > 0$, then $P_\chi(m,q) = 1$, for the same reasons as in the uniform case. To find $P_\chi(q,q)$, notice that, as in the uniform case, each sample must pick an element of $[q]$ that has not been picked earlier. Hence, if $q$ samples smear, they must form a permutation of $[q]$. Each such permutation has probability $\prod_{k=1}^{q} p_k$, where $p_k$ is the probability of picking the $k$th element, and there are $q!$ such permutations, so the probability of smearing is

$$P_\chi(q,q) = q! \prod_{k=1}^{q} p_k.$$

As expected, when $\chi$ is uniform, $p_k = 1/q$ for every $k$, and the formula simplifies to the one in Section 4.2.

Computing $P_\chi(m,q)$ recursively using Proposition 4.4 along with these base cases results in the computation of $P_{\chi/[j+1,q]}(i,j)$ for each $1 \le j \le q$ and $j \le i \le j+(m-q)$. In turn, the computation of each of these values requires a sum of on the order of $m-q$ terms. Hence, the complexity of this recursive method is on the order of $q(m-q)^2$. Notice that, as in the recursion-in-$m$ method, this recursion-in-$q$ method yields not only $P_\chi(m,q)$ but also $P_\chi(i,q)$ for all $i \in [0,m]$. Of course, an attacker on PLWE would not have prior knowledge of the nonuniform distribution $\chi$, but such information is nevertheless useful in a retrospective analysis of the effectiveness of the smearing attack (discussed later).
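Under the same illustrative conventions as before, the recursion in $q$ of Proposition 4.4 can be sketched with the distribution represented as a tuple of probabilities (an assumption of ours; memoization keys on the renormalized tuples):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def P_chi(m: int, probs: tuple) -> float:
    """Probability that m samples from the distribution `probs`
    (probabilities of the q elements) cover all q elements."""
    q = len(probs)
    if m < q:
        return 0.0
    if q == 1:
        return 1.0
    p_q = probs[-1]
    # chi/q: remove the q-th element and renormalize the rest
    rest = tuple(p / (1 - p_q) for p in probs[:-1])
    # Proposition 4.4: condition on K, the number of hits on element q
    return sum(
        comb(m, k) * p_q**k * (1 - p_q) ** (m - k) * P_chi(m - k, rest)
        for k in range(1, m - q + 2)
    )
```

One can check that the base case $P_\chi(q,q) = q!\prod_{k=1}^{q} p_k$ falls out of the recursion, and that the uniform tuple reproduces the recursion-in-$m$ values.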

4.4 Code repository

The code for generating the figures in this article, including the mapped error distributions and $P_\chi(m,q)$ curves, is available on GitHub at https://github.com/atkirtland/rlwe-smearing. The computational results in this article were produced on a single machine. Due to the time complexity of the current implementation of the algorithm (approximately quadratic in $m$ and approximately linear in $q$) and computational resource constraints, we were able to produce computational results only for small values of the parameters.

5 Smearing-based algorithm for solving PLWE with small parameters

Here, we build on the previous sections to present a smearing-based algorithm for solving the PLWE problem.

5.1 The uniform distribution smears the best

A fundamental principle is that, among all distributions on $[q]$, the uniform distribution maximizes the probability of smearing. We begin with the following lemma.

Lemma 5.1

Let $\chi$ be a distribution on $[q]$, and let $i \ne j$ be two elements of $[q]$. Let $p_i$ and $p_j$ be the probabilities of selecting $i$ and $j$, respectively. Construct $\chi'$ as follows: take $\chi$, and replace the probabilities of $i$ and $j$ with $(p_i+p_j)/2$. Then,

$$P_\chi(m,q) \le P_{\chi'}(m,q),$$

with equality if and only if $p_i = p_j$.

Proof

Define $K$ as the random variable representing the number of samples from the distribution that fall into $\{i,j\}$. Notice that $K \sim \mathrm{Bin}(m, p_i+p_j)$ for both $\chi$ and $\chi'$. Let $k$ be a specific instance of $K$, $k \ge 2$. Conditioned on $K=k$, smearing on $\{i,j\}$ is independent of smearing on $[q] \setminus \{i,j\}$, and since the probabilities on $[q] \setminus \{i,j\}$ are unchanged between $\chi$ and $\chi'$, to compare $P_\chi(m,q)$ and $P_{\chi'}(m,q)$, it suffices to compare, for the two distributions, the probabilities of picking at least one of each of $i$ and $j$ among these $k$ samples. For $\chi$, this probability is

$$\Pr(\text{picking both } i \text{ and } j \mid K=k) = 1 - \Pr(\text{not picking both } i \text{ and } j \mid K=k) = 1 - (\Pr(\text{picking only } i \mid K=k) + \Pr(\text{picking only } j \mid K=k)) = 1 - \left(\left(\frac{p_i}{p_i+p_j}\right)^k + \left(\frac{p_j}{p_i+p_j}\right)^k\right) = 1 - \frac{p_i^k + p_j^k}{(p_i+p_j)^k},$$

since the probability of picking $i$ out of $\{i,j\}$ is $p_i/(p_i+p_j)$, and similarly for $j$. For $\chi'$, a similar computation shows that

$$\Pr(\text{picking both } i \text{ and } j \mid K=k) = 1 - \frac{\left(\frac{p_i+p_j}{2}\right)^k + \left(\frac{p_i+p_j}{2}\right)^k}{\left(\frac{p_i+p_j}{2} + \frac{p_i+p_j}{2}\right)^k} = 1 - \frac{\left(\frac{p_i+p_j}{2}\right)^k + \left(\frac{p_i+p_j}{2}\right)^k}{(p_i+p_j)^k}.$$

It remains, thus, to show that

$$p_i^k + p_j^k \ge \left(\frac{p_i+p_j}{2}\right)^k + \left(\frac{p_i+p_j}{2}\right)^k,$$

with equality if and only if $p_i = p_j$. To show this, consider it as an optimization problem: minimize $f(P_i, P_j) = P_i^k + P_j^k$ subject to the constraint $g(P_i, P_j) = P_i + P_j = p_i + p_j$. By the method of Lagrange multipliers, setting $\nabla f = \lambda \nabla g$ gives

$$k P_i^{k-1} = k P_j^{k-1} = \lambda,$$

which implies that $P_i^k + P_j^k$ is minimized at $P_i = P_j = (p_i+p_j)/2$. Hence,

$$p_i^k + p_j^k \ge \left(\frac{p_i+p_j}{2}\right)^k + \left(\frac{p_i+p_j}{2}\right)^k,$$

with equality if and only if $p_i = p_j$. This implies that

$$P_\chi(m,q \mid K=k) \le P_{\chi'}(m,q \mid K=k)$$

for $k \ge 2$. Then, by the law of total probability,

$$P_{\chi'}(m,q) - P_\chi(m,q) = \sum_{k=2}^{m-q+2} P_{\chi'}(m,q \mid K=k)\Pr(K=k) - \sum_{k=2}^{m-q+2} P_\chi(m,q \mid K=k)\Pr(K=k) = \sum_{k=2}^{m-q+2} \left(P_{\chi'}(m,q \mid K=k) - P_\chi(m,q \mid K=k)\right)\Pr(K=k) \ge \sum_{k=2}^{m-q+2} 0 \cdot \Pr(K=k) = 0.$$

Therefore,

$$P_\chi(m,q) \le P_{\chi'}(m,q),$$

with equality if and only if $p_i = p_j$, as seen from the optimization problem.□

Theorem 5.2

Let $\chi$ be a probability distribution over $[q]$, and let $U$ be the uniform distribution over $[q]$. Then,

$$P_\chi(m,q) \le P_U(m,q),$$

with equality if and only if $\chi = U$.

Proof

From $\chi = \chi_0$, build a new distribution $\chi_{N+1}$ by selecting two elements of $[q]$ from the previous distribution $\chi_N$ and replacing their two probabilities $p_i$ and $p_j$ with their average $(p_i+p_j)/2$. By Lemma 5.1,

$$P_{\chi_{N+1}}(m,q) \ge P_{\chi_N}(m,q).$$

We construct the sequence $\{\chi_N\}$ so that $\{P_{\chi_N}(m,q)\}_{N=0}^{\infty}$ is a nondecreasing, infinite sequence with limit $P_U(m,q)$ (for instance, averaging at each step a largest and a smallest probability drives $\chi_N$ to $U$). This shows that $P_\chi(m,q) \le P_U(m,q)$. Furthermore, if $P_\chi(m,q) = P_U(m,q)$, then the sequence is constant, meaning $P_{\chi_N}(m,q) = P_{\chi_{N+1}}(m,q)$ for all $N$. By Lemma 5.1, this is only possible if, at each step, $p_i = p_j$, meaning that $\chi_0 = \chi_1 = \cdots = \chi_N = \cdots$. Then, $\{\chi_N\}_{N=0}^{\infty}$ is a constant sequence with limit $U$, hence $\chi = U$.□

This principle is the driving force behind the algorithm for solving the decision-PLWE problem (for small parameters), as described in the following sections.

5.2 The smearing decision problem

The foundation of the smearing attack is in what we call the "smearing decision": given a large number of samples from some probability distribution over $\mathbb{Z}_q$, decide, with some certainty, whether that distribution is the uniform distribution $U$ or a certain nonuniform distribution $\chi$. We do this in the following way.

  1. Choose the parameters $N$, the number of trials to be done, and $m$, the number of samples to be taken per trial. $N$ must be odd, while $m$ must be picked such that $P_U(m,q) > 1/2$ while $P_\chi(m,q) < 1/2$. Since $P_U(m,q) > P_\chi(m,q)$ for all $m \ge q$, and both $P_U(m,q)$ and $P_\chi(m,q)$, as functions of $m$, have range $[0,1)$, such an $m$ exists almost always.

  2. For each trial, take $m$ samples, and check whether they smear on $\mathbb{Z}_q$. If smearing happens for more than half of the $N$ trials, conclude that the samples were taken from the uniform distribution over $\mathbb{Z}_q$. If, on the other hand, smearing happens for less than half of the $N$ trials, conclude that the samples were taken from $\chi$.
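The two steps above can be sketched as follows; the callable `draw`, which produces one sample from the unknown distribution, is an assumption of this sketch rather than part of the paper's setup:

```python
def smears(samples, q):
    """True if the samples cover all of Z_q."""
    return set(samples) == set(range(q))

def smearing_decision(draw, q, m, N):
    """Majority vote over N trials (N odd), each testing whether
    m samples from `draw` smear on Z_q."""
    smear_count = sum(
        smears([draw() for _ in range(m)], q) for _ in range(N)
    )
    return "uniform" if smear_count > N / 2 else "nonuniform"
```

A distribution that never produces some residue can never smear, so it is always classified as nonuniform, whatever $m$ and $N$ are.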

To give an intuitive explanation of this decision process, consider the two graphs shown in Figures 4 and 5. Figure 4 shows the smearing probability for the uniform distribution (in blue) and a nonuniform distribution (in orange) as a function of $m$. Recall from Figure 2 that the nonuniform distributions are mapped Gaussian distributions.

Figure 4

Probability of smearing for the uniform distribution ($P_U(m,q)$, blue) vs a nonuniform distribution ($P_\chi(m,q)$, orange), with parameters as in Figure 2, as a function of $m$ with $q = 53$.

Figure 5

Probability of deciding correctly between the two distributions by whether smearing happens, in the case q = 53 with parameters mentioned earlier.

As expected, for both curves, when the number of samples is small, the probability of smearing is 0 (or close to 0), while when the number of samples is large, smearing occurs almost always. The uniform and nonuniform curves can really be differentiated only for some intermediate range of $m$.

We describe here a simple example of the smearing decision and the smearing-based algorithm in general. Suppose that the true distribution is $U$ or $\chi$, each with probability $1/2$.

Suppose that if smearing happens, the distribution is declared uniform, and if smearing does not happen, the distribution is declared nonuniform. Then, the probability that the decision is correct is expressed as follows:

$$\Pr(\text{decision is correct}) = \Pr(\text{decision is correct} \mid U)\Pr(U) + \Pr(\text{decision is correct} \mid \chi)\Pr(\chi) = \Pr(\text{smearing happens} \mid U)\cdot\frac{1}{2} + \Pr(\text{smearing doesn't happen} \mid \chi)\cdot\frac{1}{2} = \frac{1}{2}\left(P_U(m,q) + (1 - P_\chi(m,q))\right) = \frac{1}{2} + \frac{1}{2}\left(P_U(m,q) - P_\chi(m,q)\right).$$

Notice that since $P_U(m,q) > P_\chi(m,q)$, this probability is strictly greater than one half and increases linearly with the difference between the smearing probabilities of the uniform and nonuniform distributions. The graph of this probability for different values of $m$ is shown in Figure 5. As expected, the probability is highest where the distance between the two smearing probability curves is greatest.

The following proposition formalizes the idea of the smearing decision.

Proposition 5.3

Let $U$ be the uniform distribution over $\mathbb{Z}_q$, and $\chi$ some nonuniform distribution over $\mathbb{Z}_q$. Let $m$ be an integer such that $P_U(m,q) > 1/2$ and $P_\chi(m,q) < 1/2$. Then, given arbitrarily small $\alpha, \beta > 0$, if

$$N > \max\left\{ \frac{2\log\frac{1}{\alpha}}{\left(1 - \frac{1}{2P_U(m,q)}\right)^2 P_U(m,q)},\ \frac{2\log\frac{1}{\beta}}{\left(\frac{1}{2P_\chi(m,q)} - 1\right)^2 P_\chi(m,q)} \right\},$$

then the smearing decision with $N$ trials is correct with probability greater than $1-\alpha$ when the true distribution is $U$, and with probability greater than $1-\beta$ when the true distribution is $\chi$.

Proof

Consider the case in which the unknown distribution about which the decision is being made is actually $U$. Define $X$ to be a random variable denoting the number of trials for which the samples smear. In this case, for each trial, smearing happens with probability $P_U(m,q)$ (which we denote simply $P_U$ for convenience), and the trials are independent of one another. Hence, $X \sim \mathrm{Bin}(N, P_U)$, so $E(X) = NP_U$. In this case, the probability that the smearing decision is incorrect is the probability that fewer than $N/2$ trials smear. Using the Chernoff bound, which states that for a binomially distributed random variable $X$ and any $0 \le \delta \le 1$,

$$\Pr(X < (1-\delta)E(X)) \le \exp\left(-\frac{\delta^2 E(X)}{2}\right),$$

we conclude that

$$\Pr(\text{decision is incorrect} \mid U) = \Pr\left(X < \frac{N}{2}\right) = \Pr\left(X < \left(1 - \left(1 - \frac{1}{2P_U}\right)\right)NP_U\right) \le \exp\left(-\left(1 - \frac{1}{2P_U}\right)^2 \frac{NP_U}{2}\right).$$

Hence, to show that the smearing decision is incorrect, in the case where the true distribution is U , with probability less than α , it is sufficient to show that

$$\exp\left(-\left(1 - \frac{1}{2P_U}\right)^2 \frac{NP_U}{2}\right) < \alpha,$$

which is equivalent to

$$N > \frac{2\log\frac{1}{\alpha}}{\left(1 - \frac{1}{2P_U}\right)^2 P_U},$$

which is true by the assumptions of the proposition. On the other hand, in the case where the unknown distribution is $\chi$, $X \sim \mathrm{Bin}(N, P_\chi)$ (where we denote $P_\chi(m,q)$ by $P_\chi$ for convenience), and the decision is incorrect whenever smearing happens in more than $N/2$ trials. Because the binomial distribution is symmetric about the mean, we know that

$$\Pr\left(X > \frac{N}{2}\right) = \Pr\left(X < \left(2P_\chi - \frac{1}{2}\right)N\right).$$

Then, by an argument similar to the one above,

$$\Pr(\text{decision is incorrect} \mid \chi) = \Pr\left(X > \frac{N}{2}\right) = \Pr\left(X < \left(2P_\chi - \frac{1}{2}\right)N\right) = \Pr\left(X < \left(1 - \left(\frac{1}{2P_\chi} - 1\right)\right)NP_\chi\right) \le \exp\left(-\left(\frac{1}{2P_\chi} - 1\right)^2 \frac{NP_\chi}{2}\right).$$

Hence, to show that the smearing decision is incorrect, in the case where the true distribution is χ , with probability less than β , it is sufficient to show that

$$\exp\left(-\left(\frac{1}{2P_\chi} - 1\right)^2 \frac{NP_\chi}{2}\right) < \beta,$$

which is equivalent to

$$N > \frac{2\log\frac{1}{\beta}}{\left(\frac{1}{2P_\chi} - 1\right)^2 P_\chi},$$

which is true by the assumptions of the proposition.□
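The bound of Proposition 5.3 translates directly into a parameter choice. A minimal sketch, assuming the natural logarithm as in the proof; the helper name and interface are ours:

```python
from math import ceil, log

def required_trials(P_U, P_chi, alpha, beta):
    """An odd N satisfying the bound of Proposition 5.3, given
    P_U(m,q) > 1/2 > P_chi(m,q) and target error rates alpha, beta."""
    n_u = 2 * log(1 / alpha) / ((1 - 1 / (2 * P_U)) ** 2 * P_U)
    n_chi = 2 * log(1 / beta) / ((1 / (2 * P_chi) - 1) ** 2 * P_chi)
    N = ceil(max(n_u, n_chi))
    return N if N % 2 == 1 else N + 1  # the decision requires odd N
```

Note that the closer $P_U(m,q)$ and $P_\chi(m,q)$ are to $1/2$, the larger the denominators' squared terms shrink and the more trials are needed, which matches the intuition from Figure 5.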

5.3 The smearing algorithm

The proposed smearing algorithm for solving the decision-PLWE problem proceeds as follows. Assume that we know $P_q$ (and a root $\gamma$ of the polynomial $f(x)$) and have access to a large number of samples $(a_i, b_i) \in P_q \times P_q$.

Algorithm. The smearing-based algorithm works as follows:

  1. Choose parameters m and N to achieve the desired error probabilities α and β .

  2. For all possible guesses $g \in \mathbb{Z}_q$ for the true value of $s(\gamma)$, make the following smearing decision: for each of $N$ trials, draw $m$ samples $(a_i, b_i)$, compute the set $S = \{b_i(\gamma) - a_i(\gamma)\cdot g\}$, and determine whether $S$ smears, i.e., $S = \mathbb{Z}_q$.

    1. If smearing occurs in more than half of the trials ( > N / 2 trials), conclude that the error distribution resulting from the guess g is uniform.

    2. Else conclude that the error distribution from the guess g is nonuniform (in fact, it is the mapped Gaussian distribution).

  3. Make a decision about the sample distribution:

    1. If the error distribution is uniform for all values of $g \in \mathbb{Z}_q$, conclude that the samples $(a_i, b_i)$ originally came from a uniform distribution.

    2. Else, if the error distribution is uniform for all but one value of $g$, conclude that the samples $(a_i, b_i)$ originally came from the PLWE distribution. In this case, the value of $g$ that gives a nonuniform error distribution is likely the true value of the secret $s$ evaluated at $\gamma$.

    3. If the error distribution is nonuniform for more than one value of g , choose better values for the parameters m and N and repeat the steps of the algorithm.

If $(a_i, b_i)$ actually came from the uniform distribution over $P_q \times P_q$, then, for any value of $g \in \mathbb{Z}_q$, the values $b_i(\gamma) - a_i(\gamma)\cdot g$ would also be uniformly distributed over $\mathbb{Z}_q$. If, on the other hand, $(a_i, b_i)$ came from the PLWE distribution but $g$ was an incorrect guess for $s(\gamma)$, then the values $b_i(\gamma) - a_i(\gamma)\cdot g$ would again be uniformly distributed over $\mathbb{Z}_q$. Only if $(a_i, b_i)$ came from the PLWE distribution and $g$ was the correct guess for $s(\gamma)$ would the values $b_i(\gamma) - a_i(\gamma)\cdot g$ follow the true distribution of the PLWE error term $e$ when mapped to $\mathbb{Z}_q$ by evaluating $e$ at $x = \gamma$.
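The attack loop can be sketched as follows, under the simplifying assumption (ours, for illustration) that each sample is already given as the pair of evaluations $(a_i(\gamma) \bmod q,\, b_i(\gamma) \bmod q)$ rather than as polynomials:

```python
def smearing_attack(draw_pair, q, m, N):
    """Run the smearing decision for every guess g for s(gamma).
    `draw_pair` returns one sample (a(gamma) % q, b(gamma) % q).
    Returns ("uniform", None), ("plwe", g), or ("retry", None)."""
    nonuniform = []
    for g in range(q):
        smear_count = 0
        for _ in range(N):
            errors = set()
            for _ in range(m):
                a, b = draw_pair()
                errors.add((b - a * g) % q)  # candidate error b(gamma) - a(gamma)*g
            if errors == set(range(q)):      # this trial smears on Z_q
                smear_count += 1
        if smear_count <= N / 2:             # majority vote: nonuniform
            nonuniform.append(g)
    if not nonuniform:
        return ("uniform", None)
    if len(nonuniform) == 1:
        return ("plwe", nonuniform[0])
    return ("retry", None)  # ambiguous: re-run with better m, N
```

In the degenerate toy case of a zero error term, the correct guess $g = s(\gamma)$ always yields the singleton $\{0\}$, which never smears, while every wrong guess yields $a(\gamma)(s(\gamma)-g)$, uniform for uniform $a$ and prime $q$.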

For well-chosen parameters $m$ and $N$, the smearing attack described earlier correctly distinguishes between the PLWE and the uniform distributions over $P_q \times P_q$ almost always, as formalized in the following proposition.

Proposition 5.4

Let $\alpha$ and $\beta$ be, respectively, the type 1 and type 2 errors of the smearing decision over $\mathbb{Z}_q$. Then, if the true distribution of the samples $(a_i, b_i)$ is the uniform distribution over $P_q \times P_q$, the smearing attack gives the correct decision with probability

$$\frac{1-\alpha}{1 + (q-1)\alpha}.$$

If the true distribution of the samples $(a_i, b_i)$ is the PLWE distribution over $P_q \times P_q$, the smearing attack gives the correct decision with probability

$$\frac{1 - \alpha - \beta + q\alpha\beta}{1 - \alpha + (q-1)\alpha\beta}.$$

Proof

First, assume that the original distribution over $P_q \times P_q$ is uniform. Then, we are concerned with only two outcomes: either all $q$ decisions indicate a uniform distribution, or one of the $q$ decisions indicates a nonuniform distribution while the rest indicate uniform.

In this case, the probability that all $q$ decisions indicate that the error distribution is uniform is the probability that all the smearing decisions are correct, which is $(1-\alpha)^q$. On the other hand, the probability that one of the decisions indicates a nonuniform error distribution while the rest indicate uniform is the probability that exactly one smearing decision is incorrect while the rest are correct, which is $q\alpha(1-\alpha)^{q-1}$. In the first outcome, the attack indicates that the distribution is uniform, while in the second, it indicates that the distribution is the PLWE distribution. Thus, the probability that the attack is successful in this case is

$$\frac{(1-\alpha)^q}{(1-\alpha)^q + q\alpha(1-\alpha)^{q-1}} = \frac{1-\alpha}{1 + (q-1)\alpha}.$$

On the other hand, assume that the original distribution over $P_q \times P_q$ is the PLWE distribution. In this case, for $q-1$ of the guesses for $g$, the error distribution will be uniform, while for the correct guess, the error distribution will be the mapped Gaussian distribution. Correspondingly, there are three possible outcomes:

  1. All $q$ of the smearing decisions could indicate that the error distribution over $\mathbb{Z}_q$ is uniform, which means that $q-1$ of the smearing decisions correctly chose the uniform distribution, while the smearing decision corresponding to the correct guess incorrectly chose the uniform distribution. The probability of this outcome is $(1-\alpha)^{q-1}\beta$.

  2. The $q-1$ smearing decisions corresponding to wrong guesses could indicate that the error distribution is uniform, while the smearing decision corresponding to the correct guess indicates that the error distribution is nonuniform. This means all smearing decisions were made correctly, the probability of which is $(1-\alpha)^{q-1}(1-\beta)$.

  3. The smearing decisions could indicate a uniform error distribution for all but one value of $g$, with the exception corresponding to a wrong guess. In this case, $q-2$ of the smearing decisions need to identify a uniform distribution correctly, the one corresponding to the right guess needs to fail to identify the nonuniform distribution, and one smearing decision corresponding to a wrong guess needs to fail to identify a uniform distribution. The probability of this happening is $(q-1)(1-\alpha)^{q-2}\alpha\beta$.

The first outcome gives an incorrect decision, while the second and third give the correct decision. Therefore, the probability that the attack is successful if the original distribution is the PLWE distribution is

$$\frac{(1-\alpha)^{q-1}(1-\beta) + (q-1)(1-\alpha)^{q-2}\alpha\beta}{(1-\alpha)^{q-1}\beta + (1-\alpha)^{q-1}(1-\beta) + (q-1)(1-\alpha)^{q-2}\alpha\beta} = \frac{1 - \alpha - \beta + q\alpha\beta}{1 - \alpha + (q-1)\alpha\beta}.\qquad\Box$$
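The algebraic simplifications in the proof of Proposition 5.4 can be double-checked numerically with a throwaway sketch (the function names are ours):

```python
def success_uniform(alpha, q):
    # Closed form from Proposition 5.4, uniform case
    return (1 - alpha) / (1 + (q - 1) * alpha)

def success_plwe(alpha, beta, q):
    # Closed form from Proposition 5.4, PLWE case
    return (1 - alpha - beta + q * alpha * beta) / (1 - alpha + (q - 1) * alpha * beta)

def success_uniform_raw(alpha, q):
    # Unsimplified ratio of outcome probabilities
    good = (1 - alpha) ** q
    return good / (good + q * alpha * (1 - alpha) ** (q - 1))

def success_plwe_raw(alpha, beta, q):
    o1 = (1 - alpha) ** (q - 1) * beta                    # all decisions say uniform
    o2 = (1 - alpha) ** (q - 1) * (1 - beta)              # exactly the right g flagged
    o3 = (q - 1) * (1 - alpha) ** (q - 2) * alpha * beta  # a wrong g flagged
    return (o2 + o3) / (o1 + o2 + o3)
```

Both raw forms reduce to the closed forms after dividing numerator and denominator by $(1-\alpha)^{q-2}$, so the two implementations agree for any $0 < \alpha, \beta < 1$.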

6 Conclusion and future work

In this work, we characterized the probability of smearing and developed a smearing-based attack on the decision-PLWE problem. By extension, our analysis also bears on the security of the RLWE problem through the RLWE-to-PLWE reduction [8] (for a specific class of rings). In future work, we plan to examine the practical implementation and effectiveness of the smearing-based algorithm for solving PLWE and RLWE.

  1. Funding information: This research was supported by the National Science Foundation under the grant DMS-1659872.

  2. Conflict of interest: The authors state no conflict of interest.

References

[1] Albrecht MR, Deo A. Large modulus ring-LWE ≥ module-LWE. ASIACRYPT 2017. 2017;10624:267–96. 10.1007/978-3-319-70694-8_10

[2] Banerjee A, Peikert C, Rosen A. Pseudorandom functions and lattices. Lecture Notes Comput Sci. 2012;7237:719–37. 10.1007/978-3-642-29011-4_42

[3] Brakerski Z, Vaikuntanathan V. Fully homomorphic encryption from ring-LWE and security for key dependent messages. Lecture Notes Comput Sci. 2011;6841:505–24. 10.1007/978-3-642-22792-9_29

[4] Chen Y, Case BM, Gao S, Gong G. Error analysis of weak Poly-LWE instances. Cryptography Commun. 2019;11:411–26. 10.1007/s12095-018-0301-x

[5] Damgård I, Polychroniadou A. Adaptively secure multi-party computation from LWE. Lecture Notes Comput Sci. 2016;9615:208–33. 10.1007/978-3-662-49387-8_9

[6] Galbraith SD. Mathematics of public key cryptography. 1st edition. Cambridge, United Kingdom: Cambridge University Press; 2012. 10.1017/CBO9781139012843

[7] Elias Y, Lauter KE, Ozman E, Stange KE. Provably weak instances of ring-LWE. Lecture Notes Comput Sci. 2015;9215:63–92. 10.1007/978-3-662-47989-6_4

[8] Elias Y, Lauter KE, Ozman E, Stange KE. Ring-LWE cryptography for the number theorist. In: Directions in number theory. Association for Women in Mathematics Series. Vol. 3. Cham: Springer; 2016. p. 271–90. 10.1007/978-3-319-30976-7_9

[9] Erdős P, Rényi A. On a classical problem of probability theory. Magyar Tudományos Akadémia Matematikai Kutató Intézetének Közleményei. 1961;6:215–20.

[10] Ferrante M, Saltalamacchia M. The coupon collector's problem. MATerials MATemàtics. 2014;2014(2):35. ISSN: 1887-1097.

[11] Grover LK. A fast quantum mechanical algorithm for database search. In: Proceedings of the 28th Annual ACM Symposium on the Theory of Computing (STOC). Pennsylvania; 1996. p. 212–9. 10.1145/237814.237866

[12] Hoffstein J, Pipher J, Silverman JH. NTRU: a ring based public key cryptosystem. Lecture Notes Comput Sci. 1998;1423:267–88. 10.1007/BFb0054868

[13] Lindner R, Peikert C. Better key sizes (and attacks) for LWE-based encryption. Lecture Notes Comput Sci. 2011;6558:319–39. 10.1007/978-3-642-19074-2_21

[14] Regev O. On lattices, learning with errors, random linear codes, and cryptography. JACM. 2009;56(6):84–93. 10.1145/1060590.1060603

[15] Lyubashevsky V, Peikert C, Regev O. On ideal lattices and learning with errors over rings. Lecture Notes Comput Sci. 2010;6110:1–25. 10.1007/978-3-642-13190-5_1

[16] Micciancio D, Regev O. Lattice-based cryptography. In: Bernstein DJ, Buchmann J, Dahmen E, (eds). Post-quantum cryptography. Berlin, Heidelberg: Springer; 2009. p. 147–91. 10.1007/978-3-540-88702-7_5

[17] National Institute of Standards and Technology. Announcing request for nominations for public-key post-quantum cryptographic algorithms. Federal Register. 2016;81(244):92787–8.

[18] Peikert C. Lattice cryptography for the internet. In: Mosca M, (ed). Post-quantum cryptography. Lecture Notes in Computer Science. Vol. 8772. Cham: Springer; 2014. 10.1007/978-3-319-11659-4_12

[19] Shor PW. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J Comput. 1997;26(5):1484–509. 10.1137/S0097539795293172

[20] Wang T, Yu J, Zhang R, Zhang Y. Efficient signature schemes from R-LWE. Trans Internet Inf Syst. 2016;10:3911–24. 10.3837/tiis.2016.08.026

Received: 2020-08-14
Revised: 2022-01-19
Accepted: 2022-06-08
Published Online: 2022-08-10

© 2022 Liljana Babinkostova et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
