
A detailed analysis of the hybrid lattice-reduction and meet-in-the-middle attack

  • Thomas Wunderer
Published/Copyright: October 5, 2018

Abstract

Over the past decade, the hybrid lattice-reduction and meet-in-the-middle attack (called hybrid attack) has been used to evaluate the security of many lattice-based cryptographic schemes such as NTRU, NTRU Prime, BLISS and more. However, unfortunately, none of the previous analyses of the hybrid attack is entirely satisfactory: they are based on simplifying assumptions that may distort the security estimates. Such simplifying assumptions include setting probabilities equal to $1$ that, for the parameter sets we analyze in this work, are in fact as small as $2^{-80}$. Many of these assumptions lead to underestimating the scheme's security. However, some lead to security overestimates, and without further analysis, it is not clear which is the case. Therefore, the current security estimates against the hybrid attack are not reliable, and the actual security levels of many lattice-based schemes are unclear. In this work, we present an improved runtime analysis of the hybrid attack that is based on more reasonable assumptions. In addition, we reevaluate the security against the hybrid attack for the NTRU, NTRU Prime and R-BinLWEEnc encryption schemes as well as for the BLISS and GLP signature schemes. Our results show that there exist both security over- and underestimates in the literature.

MSC 2010: 94A60; 11T71

1 Introduction

In 2007, Howgrave-Graham proposed the hybrid lattice-reduction and meet-in-the-middle attack [20] (referred to as the hybrid attack in the following) against the NTRU encryption scheme [19]. Several works [20, 21, 17, 18, 30] claim that the hybrid attack is by far the best known attack on NTRUEncrypt. In the following years, numerous cryptographers have applied the hybrid attack to their cryptographic schemes in order to estimate their security. These considerations include more variants of the NTRU encryption scheme [17, 18, 30], the recently proposed encryption scheme NTRU Prime [8], a lightweight encryption scheme based on Ring-LWE with binary error [10, 9], and the signature schemes BLISS [14] and GLP [16, 14]. However, the above analyses of the hybrid attack make use of over-simplifying assumptions, yielding unreliable security estimates of the schemes. Many of these assumptions lead to more conservative security estimates (lower than necessary), as they give more power to the attacker. While this is not a problem from a security perspective, in those cases, the schemes might be instantiated more efficiently while preserving the desired security level. On the other hand, there also exist the more dangerous cases in which the security of a scheme is overestimated, i.e., the scheme is less secure than it is claimed to be, as we show in this work. In [30], Schanck summarizes the current state of the analyses of the hybrid attack as follows:

“[…] it should be noted that in the author’s opinion, no analysis of the hybrid attack presented thus far is entirely satisfactory. […] it is hoped that future work will answer some of the outstanding questions related to the attack’s probability of success as a function of the effort spent in lattice reduction and enumeration.”

One of the most common [21, 14, 18, 30, 8] over-simplifications is to assume that collisions in the meet-in-the-middle phase of the attack will always be detected. However, in reality, collisions can only be detected with some (possibly very low) probability. For instance, for the cryptographic schemes we analyze in this work, this probability is sometimes as low as $2^{-80}$, highlighting the unrealistic nature of some simplifying assumptions made in previous analyses of the hybrid attack.

Our contribution

In this work, we provide a detailed and more satisfying analysis of the hybrid attack. This is achieved in the following way. We present a generalized version of the hybrid attack applied to shortest vector problems (SVP) and show how it can also be used to solve bounded distance decoding (BDD) problems. This general framework for the hybrid attack can naturally be applied to many lattice-based cryptographic constructions, as we also show in this work. We further provide a detailed and improved analysis of the generalized version of the hybrid attack, which can be used to derive updated security estimates. We offer two types of formulas for our security estimates – one that reflects the current state of the art and a more conservative one that reflects potential advances in cryptanalysis – giving a possible range for the security level. In our analysis of the attack, we reduce the number of underlying assumptions, eliminate the over-simplifying ones and clearly state the remaining ones in order to offer as much transparency as possible. We further provide some experimental results and comparisons to the literature to support the validity of the remaining assumptions on the involved success probabilities.

Our second main contribution is the following: Since previous analyses of the hybrid attack are unreliable, the security estimates of many lattice-based cryptographic schemes against the hybrid attack might be inaccurate, and their actual security level is unclear. We therefore apply our improved analysis to reevaluate the security of various cryptographic schemes against the hybrid attack in order to derive updated security estimates.[1] We first revisit the security against the hybrid attack of the NTRU [18], NTRU Prime [8] and R-BinLWEEnc [9] encryption schemes, and end with the BLISS [14] and GLP [16] signature schemes. Our results show that there exist both security over- and underestimates against the hybrid attack across the literature.

Outline

This work is structured as follows. First, we fix notation and provide the necessary background in Section 2. In Section 3, we describe a generalized version of the hybrid attack on shortest vector problems (SVP) and further explain how it can also be used to solve bounded distance decoding (BDD) problems. Our runtime analysis of the hybrid attack is presented in Section 4. In Section 5, we apply our analysis of the hybrid attack to various cryptographic schemes in order to derive updated security estimates against the hybrid attack. We end this work by giving a conclusion and outlook for possible future work.

2 Preliminaries

Notation.

In this work, we write vectors in bold lowercase letters, e.g., $\mathbf{a}$, and matrices in bold uppercase letters, e.g., $\mathbf{A}$. Polynomials are written in normal lowercase letters, e.g., $a$. We frequently identify polynomials $a = \sum_{i=0}^{n} a_i x^i$ with their coefficient vectors $\mathbf{a} = (a_0, \ldots, a_n)$, indicated by using the corresponding bold letter. Let $n, q \in \mathbb{N}$, let $f \in \mathbb{Z}[x]$ be a polynomial of degree $n$, and let $R_q = \mathbb{Z}_q[x]/(f)$. We define the rotation matrix of a polynomial $a \in R_q$ as $\mathrm{rot}(a) = (\mathbf{a}, \mathbf{ax}, \mathbf{ax^2}, \ldots, \mathbf{ax^{n-1}}) \in \mathbb{Z}_q^{n \times n}$. Then for $a, b \in R_q$, the matrix-vector product $\mathrm{rot}(a)\,\mathbf{b} \bmod q$ corresponds to the product of polynomials $ab \in R_q$.

We use the abbreviation $\log(\cdot)$ for $\log_2(\cdot)$. We further write $\|\cdot\|$ instead of $\|\cdot\|_2$ for the Euclidean norm. For $N \in \mathbb{Z}_{\geq 0}$ and $m_1, \ldots, m_k \in \mathbb{Z}_{\geq 0}$ with $m_1 + \cdots + m_k = N$, the multinomial coefficient is defined as

$$\binom{N}{m_1, \ldots, m_k} = \frac{N!}{m_1! \cdots m_k!}.$$

Lattices and bases.

In this work, we use the following definition of lattices. A discrete additive subgroup of $\mathbb{R}^m$, for some $m \in \mathbb{N}$, is called a lattice. Let $m$ be a positive integer. For a set of vectors $\mathbf{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_n\} \subset \mathbb{R}^m$, the lattice spanned by $\mathbf{B}$ is defined as

$$\Lambda(\mathbf{B}) = \Big\{\mathbf{x} \in \mathbb{R}^m \;\Big|\; \mathbf{x} = \sum_{i=1}^{n} \alpha_i \mathbf{b}_i \text{ for } \alpha_i \in \mathbb{Z}\Big\}.$$

Let $\Lambda \subset \mathbb{R}^m$ be a lattice. A set of vectors $\mathbf{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_n\} \subset \mathbb{R}^m$ is called a basis of $\Lambda$ if $\mathbf{B}$ is $\mathbb{R}$-linearly independent and $\Lambda = \Lambda(\mathbf{B})$. Abusing notation, we identify lattice bases with matrices and vice versa by taking the basis vectors as the columns of the matrix. The number of vectors in a basis of a lattice is called the rank (or dimension) of the lattice. If the rank is maximal, i.e., $n = m$, the lattice is called full-rank. In the following, we only consider full-rank lattices. Let $q$ be a positive integer. The length of the shortest non-zero vector of a lattice $\Lambda$ is denoted by $\lambda_1(\Lambda)$. An integer lattice $\Lambda \subset \mathbb{Z}^m$ that contains $q\mathbb{Z}^m$ is called a $q$-ary lattice. For a matrix $\mathbf{A} \in \mathbb{Z}_q^{m \times n}$, we define the $q$-ary lattice

$$\Lambda_q(\mathbf{A}) := \{\mathbf{v} \in \mathbb{Z}^m \mid \text{there exists } \mathbf{w} \in \mathbb{Z}^n \text{ such that } \mathbf{A}\mathbf{w} = \mathbf{v} \bmod q\}.$$

For a lattice basis $\mathbf{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_m\} \subset \mathbb{R}^m$, its fundamental parallelepiped is defined as

$$\mathcal{P}(\mathbf{B}) = \Big\{\mathbf{x} = \sum_{i=1}^{m} \alpha_i \mathbf{b}_i \in \mathbb{R}^m \;\Big|\; -\tfrac{1}{2} \leq \alpha_i < \tfrac{1}{2} \text{ for all } i \in \{1, \ldots, m\}\Big\}.$$

The determinant $\det(\Lambda)$ of a lattice $\Lambda \subset \mathbb{R}^m$ is defined as the $m$-dimensional volume of the fundamental parallelepiped of a basis of $\Lambda$. Note that the determinant of the lattice is well-defined, i.e., it is independent of the choice of basis. The Hermite delta (or Hermite factor) $\delta$ of a lattice basis $\mathbf{B} = \{\mathbf{b}_1, \ldots, \mathbf{b}_m\} \subset \mathbb{R}^m$ is defined via the equation $\|\mathbf{b}_1\| = \delta^m \det(\Lambda)^{1/m}$. It provides a measure for the quality of the basis.

Lattice-based cryptography is based on the presumed hardness of computational problems in lattices. Two of the most important lattice problems are the following:

Problem (Shortest vector problem, SVP).

Given a lattice basis $\mathbf{B}$, the task is to find a shortest non-zero vector in the lattice $\Lambda(\mathbf{B})$.

Problem (Bounded distance decoding, BDD).

Given $\alpha \in \mathbb{R}_{>0}$, a lattice basis $\mathbf{B} \subset \mathbb{R}^m$ and a target vector $\mathbf{t} \in \mathbb{R}^m$ with $\mathrm{dist}(\mathbf{t}, \Lambda(\mathbf{B})) < \alpha\,\lambda_1(\Lambda(\mathbf{B}))$, the task is to find a vector $\mathbf{e} \in \mathbb{R}^m$ with $\|\mathbf{e}\| < \alpha\,\lambda_1(\Lambda(\mathbf{B}))$ such that $\mathbf{t} - \mathbf{e} \in \Lambda(\mathbf{B})$.

Babai’s nearest plane.

Babai’s nearest plane algorithm [5] (denoted by $\mathrm{NP}$ in the following) is an important building block of the hybrid attack. For more details on the algorithm, we refer to Babai’s original work [5] or Lindner and Peikert’s work [24]. We use the nearest plane algorithm in a black-box manner. For the reader, it is sufficient to know the following: the input of the nearest plane algorithm is a lattice basis $\mathbf{B} \subset \mathbb{Z}^m$ and a target vector $\mathbf{t} \in \mathbb{R}^m$, and the corresponding output is a vector $\mathbf{e} \in \mathbb{R}^m$ such that $\mathbf{t} - \mathbf{e} \in \Lambda(\mathbf{B})$. We denote the output by $\mathrm{NP}_{\mathbf{B}}(\mathbf{t}) = \mathbf{e}$. If there is no risk of confusion, we may omit the basis in the notation, writing $\mathrm{NP}(\mathbf{t})$ instead of $\mathrm{NP}_{\mathbf{B}}(\mathbf{t})$. The output of the nearest plane algorithm satisfies the following condition, as shown in [5].

Lemma 2.1.

Let $\mathbf{B} \subset \mathbb{Z}^m$ be a lattice basis, and let $\mathbf{t} \in \mathbb{R}^m$ be a target vector. Then $\mathrm{NP}_{\mathbf{B}}(\mathbf{t})$ is the unique vector $\mathbf{e} \in \mathcal{P}(\bar{\mathbf{B}})$ that satisfies $\mathbf{t} - \mathbf{e} \in \Lambda(\mathbf{B})$, where $\bar{\mathbf{B}}$ is the Gram–Schmidt basis of $\mathbf{B}$.

The lengths of the Gram–Schmidt vectors of a reduced basis can be estimated by the following heuristic (for more details, we refer to [24]).

Heuristic (Geometric series assumption (GSA)).

Let $\mathbf{B} \subset \mathbb{Z}^m$ be a reduced basis of some full-rank lattice with Hermite delta $\delta$, and let $D$ denote the determinant of $\Lambda(\mathbf{B})$. Further, let $\bar{\mathbf{b}}_1, \ldots, \bar{\mathbf{b}}_m$ denote the corresponding Gram–Schmidt vectors of $\mathbf{B}$. Then the length of $\bar{\mathbf{b}}_i$ is approximately

$$\|\bar{\mathbf{b}}_i\| \approx \delta^{-2(i-1)+m} D^{\frac{1}{m}}.$$

While the GSA is widely relied upon in lattice-based cryptography (see, e.g., [2, 3, 4, 10, 13, 27, 20]), we emphasize that it does not offer precise estimates, in particular for the last indices of highly reduced bases, see, e.g., [12].
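To make the GSA concrete, the following short Python sketch (our own illustrative code, not part of the original analysis; the function name gsa_lengths is ours) computes the Gram–Schmidt lengths predicted by the GSA for a given Hermite delta, dimension and determinant.

```python
import math

def gsa_lengths(delta, m, det):
    """GSA-predicted Gram-Schmidt lengths ||b*_i|| = delta^(-2(i-1)+m) * det^(1/m)."""
    root_det = math.exp(math.log(det) / m)   # det^(1/m), avoids float overflow for huge det
    return [delta ** (-2 * (i - 1) + m) * root_det for i in range(1, m + 1)]

# Example: a 200-dimensional lattice of determinant 128^100 with delta = 1.008,
# the setting later used in the comparison of Section 4.2.
R = gsa_lengths(1.008, 200, 128 ** 100)
print(round(R[0], 2), round(R[-1], 2))       # longest and shortest predicted length
```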

3 The hybrid attack

In this section, we present a generalized version of the hybrid attack to solve shortest vector problems. Our framework for the hybrid attack is the following: The task is to find the shortest vector $\mathbf{v}$ in a lattice $\Lambda$, given a basis of $\Lambda$ of the form

$$\mathbf{B} = \begin{pmatrix} \mathbf{B}' & \mathbf{C} \\ \mathbf{0} & \mathbf{I}_r \end{pmatrix} \in \mathbb{Z}^{m \times m},$$

where $0 < r < m$ is the meet-in-the-middle dimension, $\mathbf{B}' \in \mathbb{Z}^{(m-r) \times (m-r)}$ and $\mathbf{C} \in \mathbb{Z}^{(m-r) \times r}$. In Appendix A.1, we show that for $q$-ary lattices, where $q$ is prime, one can always construct a basis of this form, provided that the determinant of the lattice is at most $q^{m-r}$. Additionally, in Section 5, we show that our framework can be applied to many lattice-based cryptographic schemes.

The main idea of the attack is the following: Let $\mathbf{v}$ be a short vector contained in the lattice $\Lambda$. We split the short vector $\mathbf{v}$ into two parts $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t$ with $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$. The second part $\mathbf{v}_g$ represents the part of $\mathbf{v}$ that is recovered by guessing (meet-in-the-middle) during the attack, while the first part $\mathbf{v}_l$ is recovered with lattice techniques (solving BDD problems). Because of the special form of the basis $\mathbf{B}$, we have that

$$\mathbf{v} = \begin{pmatrix} \mathbf{v}_l \\ \mathbf{v}_g \end{pmatrix} = \mathbf{B} \begin{pmatrix} \mathbf{x} \\ \mathbf{v}_g \end{pmatrix} = \begin{pmatrix} \mathbf{B}'\mathbf{x} + \mathbf{C}\mathbf{v}_g \\ \mathbf{v}_g \end{pmatrix}$$

for some vector $\mathbf{x} \in \mathbb{Z}^{m-r}$, hence $\mathbf{C}\mathbf{v}_g = -\mathbf{B}'\mathbf{x} + \mathbf{v}_l$. This means that $\mathbf{C}\mathbf{v}_g$ is close to the lattice $\Lambda(\mathbf{B}')$, since it differs from a lattice point only by the short vector $\mathbf{v}_l$, and therefore $\mathbf{v}_l$ can be recovered by solving a BDD problem if $\mathbf{v}_g$ is known. The idea now is that if we can correctly guess the vector $\mathbf{v}_g$, we can hope to find $\mathbf{v}_l$ using the nearest plane algorithm (see Section 2) via $\mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$, which is the case if the basis $\mathbf{B}'$ is sufficiently reduced. Solving the BDD problem using the nearest plane algorithm is the lattice part of the attack. The lattice $\Lambda(\mathbf{B}')$ in which we need to solve BDD has the same determinant as the lattice $\Lambda(\mathbf{B})$ in which we want to solve SVP, but it has smaller dimension, i.e., $m-r$ instead of $m$. Therefore, the newly obtained BDD problem is potentially easier to solve than the original SVP instance.

In the following, we explain how one can speed up the guessing part of the attack by Odlyzko’s meet-in-the-middle approach. Using this technique, one is able to reduce the number of necessary guesses to roughly the square root of the number of guesses needed in a naive brute-force approach. Odlyzko’s meet-in-the-middle attack on NTRU was first described in [22] and applied in the hybrid lattice-reduction and meet-in-the-middle attack against NTRU in [20]. The idea is that instead of guessing $\mathbf{v}_g$ directly in a large set $M$ of possible vectors, we guess sparser vectors $\mathbf{v}_g'$ and $\mathbf{v}_g''$ in a smaller set $N$ of vectors such that $\mathbf{v}_g' + \mathbf{v}_g'' = \mathbf{v}_g$. In our attack, the larger set $M$ will be the set of all vectors with a fixed number $2c_i$ of the non-zero entries equal to $i$ for all $i \in \{\pm 1, \ldots, \pm k\}$, where $k = \|\mathbf{v}_g\|_\infty$. The smaller set $N$ will be the set of all vectors with only half as many, i.e., only $c_i$, of the non-zero entries equal to $i$ for all $i \in \{\pm 1, \ldots, \pm k\}$.

Assume that $\mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$. First, we guess vectors $\mathbf{v}_g'$ and $\mathbf{v}_g''$ in the smaller set $N$. We then compute $\mathbf{v}_l' = \mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g')$ and $\mathbf{v}_l'' = \mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g'')$. We hope that if $\mathbf{v}_g' + \mathbf{v}_g'' = \mathbf{v}_g$, then also $\mathbf{v}_l' + \mathbf{v}_l'' = \mathbf{v}_l$, i.e., that the nearest plane algorithm is additively homomorphic on those inputs. The probability that this additive property holds is one crucial element in the runtime analysis of the attack. We further need to detect when this property holds during the attack, i.e., we need to be able to recognize matching vectors $\mathbf{v}_g'$ and $\mathbf{v}_g''$ with $\mathbf{v}_g' + \mathbf{v}_g'' = \mathbf{v}_g$ and $\mathbf{v}_l' + \mathbf{v}_l'' = \mathbf{v}_l$, which we call a collision. In order to do so, we store $\mathbf{v}_g'$ and $\mathbf{v}_g''$ in (hash) boxes whose addresses depend on $\mathbf{v}_l'$ and $\mathbf{v}_l''$, respectively, such that they collide in at least one box. To define those addresses properly, note that in case of a collision, we have $\mathbf{v}_l' = -\mathbf{v}_l'' + \mathbf{v}_l$. Thus $\mathbf{v}_l'$ and $-\mathbf{v}_l''$ differ only by a vector of infinity norm $\|\mathbf{v}_l\|_\infty \leq y$. Therefore, the addresses must be crafted such that for any $\mathbf{x} \in \mathbb{Z}^m$ and $\mathbf{z} \in \mathbb{Z}^m$ with $\|\mathbf{z}\|_\infty \leq y$, the intersection of the address sets of $\mathbf{x}$ and $\mathbf{x} + \mathbf{z}$ is non-empty, i.e., $\mathcal{A}_{\mathbf{x}}(m,y) \cap \mathcal{A}_{\mathbf{x}+\mathbf{z}}(m,y) \neq \emptyset$. Furthermore, the set of addresses should not be unnecessarily large, so that the hash tables do not grow too big and unwanted collisions are unlikely to happen. The following definition satisfies these properties, as can easily be verified.

Definition 3.1.

Let $m, y \in \mathbb{N}$. For a vector $\mathbf{x} \in \mathbb{Z}^m$, the set $\mathcal{A}_{\mathbf{x}}(m,y) \subset \{0,1\}^m$ is defined as

$$\mathcal{A}_{\mathbf{x}}(m,y) = \Big\{\mathbf{a} \in \{0,1\}^m \;\Big|\; (\mathbf{a})_i = 1 \text{ if } (\mathbf{x})_i > \lceil\tfrac{y}{2}\rceil - 1 \text{ and } (\mathbf{a})_i = 0 \text{ if } (\mathbf{x})_i < -\lfloor\tfrac{y}{2}\rfloor, \text{ for } i \in \{1, \ldots, m\}\Big\}.$$

We illustrate Definition 3.1 with some examples.

Example.

Let $m = 5$ be fixed. For varying bounds $y$ and input vectors $\mathbf{x}$, we have

$\mathcal{A}_{(7,0,-1,1,-5)}(5,1) = \{(1,0,0,1,0),\, (1,1,0,1,0)\},$
$\mathcal{A}_{(8,0,-1,1,-2)}(5,2) = \{(1,0,0,1,0),\, (1,1,0,1,0),\, (1,0,1,1,0),\, (1,1,1,1,0)\},$
$\mathcal{A}_{(2,-1,9,1,-2)}(5,3) = \{(1,0,1,0,0),\, (1,0,1,1,0),\, (1,1,1,0,0),\, (1,1,1,1,0)\},$
$\mathcal{A}_{(2,-5,0,7,-2)}(5,4) = \{(1,0,0,1,0),\, (1,0,0,1,1),\, (1,0,1,1,0),\, (1,0,1,1,1)\}.$
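The following Python sketch is a direct transcription of Definition 3.1 as reconstructed above (the exact placement of the floor and ceiling is our reading of the definition, chosen to match the examples); the function name address_set is ours. It reproduces the example address sets.

```python
from itertools import product
from math import ceil, floor

def address_set(x, y):
    """Address set A_x(m, y) of Definition 3.1: coordinate a_i is forced to 1 if
    x_i > ceil(y/2) - 1, forced to 0 if x_i < -floor(y/2), and free otherwise."""
    choices = []
    for xi in x:
        if xi > ceil(y / 2) - 1:
            choices.append((1,))
        elif xi < -floor(y / 2):
            choices.append((0,))
        else:
            choices.append((0, 1))
    return set(product(*choices))

# Reproduces the first example above:
print(sorted(address_set((7, 0, -1, 1, -5), 1)))
# -> [(1, 0, 0, 1, 0), (1, 1, 0, 1, 0)]
```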

The hybrid attack on SVP without precomputation is presented in Algorithm 1. A list of the attack parameters and the parameters used in the runtime analysis of the attack and their meaning is given in Table 1. In order to increase the chance of Algorithm 1 being successful, one performs a basis reduction step as precomputation. Therefore, the complete hybrid attack, presented in Algorithm 2, is in fact a combination of a basis reduction step and Algorithm 1.

Algorithm 1 (The hybrid attack on SVP without basis reduction.).

Algorithm 2 (The hybrid attack on SVP including basis reduction.).

Table 1

Attack parameters and parameters in the runtime analysis.

Parameter      | Meaning
$m$            | lattice dimension
$r$            | meet-in-the-middle dimension
$\mathbf{B}$   | lattice basis of the whole lattice
$\mathbf{B}'$  | partially reduced lattice basis of the sublattice
$c_i$          | number of $i$-entries guessed during the attack
$y$            | infinity norm bound on $\mathbf{v}_l$
$k$            | infinity norm bound on $\mathbf{v}_g$
$Y$            | expected Euclidean norm of $\mathbf{v}_l$
$R_i$          | Gram–Schmidt lengths corresponding to $\mathbf{B}'$
$r_i$          | scaled Gram–Schmidt lengths corresponding to $\mathbf{B}'$

The hybrid attack on BDD

The hybrid attack can also be applied to BDD instead of SVP by rewriting a BDD instance into an SVP instance. This can be done in the following way (see, for example, [1]): Let $\mathbf{B}$ be a lattice basis of the form

$$\mathbf{B} = \begin{pmatrix} \mathbf{B}' & \mathbf{C} \\ \mathbf{0} & \mathbf{I}_r \end{pmatrix} \in \mathbb{Z}^{m \times m}$$

with $\mathbf{B}' \in \mathbb{Z}^{(m-r) \times (m-r)}$, $\mathbf{C} \in \mathbb{Z}^{(m-r) \times r}$, and let $\mathbf{t}$ be a target vector for BDD. Suppose $\mathbf{t} - \mathbf{v} \in \Lambda(\mathbf{B})$, where $\mathbf{v}$ is the short (bounded) vector we are looking for. Then the short vector $(\mathbf{v}, 1)^t$ is contained in the lattice $\Lambda(\mathbf{B}'')$ spanned by

$$\mathbf{B}'' = \begin{pmatrix} \mathbf{B} & \mathbf{t} \\ \mathbf{0} & 1 \end{pmatrix} \in \mathbb{Z}^{(m+1) \times (m+1)},$$

which is of the required form for the hybrid attack on SVP. Therefore, we can apply the hybrid attack on SVP to find $(\mathbf{v}, 1)^t$, solving the BDD problem. The SVP lattice $\Lambda(\mathbf{B}'')$ has the same determinant as the BDD lattice $\Lambda(\mathbf{B})$ and dimension $m+1$ instead of $m$. However, the additional dimension can be ignored, since we know the last entry of $(\mathbf{v}, 1)^t$ and therefore do not have to guess it during the meet-in-the-middle phase. Note that by definition of BDD, it is very likely that $\pm(\mathbf{v}, 1)^t$ are the only short vectors in the lattice $\Lambda(\mathbf{B}'')$. By fixing the last coordinate to be plus one, only $\mathbf{v}$, and not $-\mathbf{v}$, can be found by the attack.
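As a small illustration of this embedding, the following Python sketch (our own, on a toy lattice) builds the SVP basis $\mathbf{B}''$ from a BDD basis $\mathbf{B}$ and target $\mathbf{t}$; columns are basis vectors, matching the convention of Section 2.

```python
import numpy as np

def bdd_to_svp_basis(B, t):
    """Embed a BDD instance (basis B, target t) into an SVP instance:
    return B'' = [[B, t], [0, 1]], whose lattice contains the short vector
    (v, 1)^t whenever t - v lies in the lattice spanned by B."""
    m = B.shape[0]
    top = np.hstack([B, t.reshape(-1, 1)])
    bottom = np.hstack([np.zeros((1, m), dtype=B.dtype), np.ones((1, 1), dtype=B.dtype)])
    return np.vstack([top, bottom])

# Toy example in dimension 2: the lattice 5*Z^2 and a target close to (5, 10).
B = np.array([[5, 0], [0, 5]])
t = np.array([6, 9])                     # t = (5, 10) + (1, -1)
B2 = bdd_to_svp_basis(B, t)
# The short vector (1, -1, 1)^t lies in the lattice spanned by the columns of B2:
print(B2 @ np.array([-1, -2, 1]))        # -> [ 1 -1  1 ]
```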

4 Analysis

In this section, we analyze the runtime of the hybrid attack. First, in Heuristic 1 in Section 4.1, we estimate the runtime of the attack in case sufficient success conditions are satisfied. In Section 4.2, we then show how to determine the probability that those sufficient conditions are satisfied, i.e., how to determine (a lower bound on) the success probability. We conclude the runtime analysis of the attack by showing how to optimize the attack parameters to minimize its runtime in Section 4.3. We end the section by highlighting our improvements over previous analyses of the hybrid attack, see Section 4.4.

4.1 Runtime analysis

We now present our main result about the runtime of the generalized hybrid attack. It shows that under sufficient conditions, the attack is successful and estimates the expected runtime.

Heuristic 1.

Let all inputs be denoted as in Algorithm 1, let $Y \in \mathbb{R}_{\geq 0}$, and let $R_1, \ldots, R_{m-r}$ denote the lengths of the Gram–Schmidt basis vectors of the basis $\mathbf{B}'$. Further, let $S \subset \Lambda(\mathbf{B})$ denote the set of all non-zero lattice vectors $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t \in \Lambda(\mathbf{B})$, where $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$ with $\|\mathbf{v}_l\|_\infty \leq y$, $\|\mathbf{v}_l\| \leq Y$, $\|\mathbf{v}_g\|_\infty \leq k$, exactly $2c_i$ entries of $\mathbf{v}_g$ are equal to $i$ for all $i \in \{\pm 1, \ldots, \pm k\}$, and $\mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$. Assume that the set $S$ is non-empty.

Then Algorithm 1 is successful, and the expected number of loops can be estimated by

$$L = \binom{r}{c_{-k}, \ldots, c_k} \Bigg( p \, |S| \prod_{i \in \{\pm 1, \ldots, \pm k\}} \binom{2c_i}{c_i} \Bigg)^{-\frac{1}{2}},$$

where

$$p = \prod_{i=1}^{m-r} \Bigg( 1 - \frac{1}{r_i \, B\big(\frac{(m-r)-1}{2}, \frac{1}{2}\big)} \int_{-r_i-1}^{-r_i} \int_{\max(-1,\, z-r_i)}^{z+r_i} \big(1 - t^2\big)^{\frac{(m-r)-3}{2}} \, dt \, dz \Bigg),$$

$B(\cdot,\cdot)$ denotes the Euler beta function [28], and

$$r_i = \frac{R_i}{2Y} \quad \text{for all } i \in \{1, \ldots, m-r\}.$$

Furthermore, the expected number of operations of Algorithm 1 can be estimated by $T_{\mathrm{NP}} \cdot L$, where $T_{\mathrm{NP}}$ denotes the number of operations of one nearest plane call in the lattice $\Lambda(\mathbf{B}')$ of dimension $m-r$.

In the following remark, we explain the meaning of the (attack) parameters that appear in Heuristic 1 in more detail.

Remark 4.1.

  1. The parameters $r$, $y$, $k$, $c_{-k}, \ldots, c_k$ are the attack parameters that can be chosen by the attacker. The meet-in-the-middle dimension and the remaining lattice dimension are determined by the parameter $r$. The remaining parameters must be chosen in such a way that the requirements of Heuristic 1 are likely to be fulfilled in order to obtain a high success probability of the attack. Choosing those parameters depends heavily on the distribution of the short vectors $\mathbf{v} \in S$. In order to obtain more flexibility, this distribution is not specified in Heuristic 1. However, in Section 5, we show how one can choose the attack parameters and calculate the success probability for several distributions arising in various cryptographic schemes. At this point, we only want to remark that $y$ should be (an upper bound on) $\|\mathbf{v}_l\|_\infty$, $k$ (an upper bound on) $\|\mathbf{v}_g\|_\infty$, and $2c_i$ the (expected) number of entries of $\mathbf{v}_g$ that are equal to $i$ for $i \in \{\pm 1, \ldots, \pm k\}$.

  2. The attacker can further influence the lengths $R_1, \ldots, R_{m-r}$ of the Gram–Schmidt vectors by providing a different basis than $\mathbf{B}'$ with Gram–Schmidt lengths that lead to a more efficient attack. This is typically done by performing a basis reduction on $\mathbf{B}'$ or parts of $\mathbf{B}$ as precomputation, see Algorithm 2. The lengths of the Gram–Schmidt vectors achieved by a basis reduction with Hermite delta $\delta$ are typically estimated by the GSA (see Section 2). However, for bases of $q$-ary lattices of a special form, the GSA may be modified. For further details, see Appendix A.2. Notice that spending more time on basis reduction increases the probability $p$ in Heuristic 1 and the probability that the condition $\mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$ holds, as can be seen later in this section and in Section 4.2.

  3. Because of the previous remark, the complete attack – presented in Algorithm 2 – is actually a combination of precomputation (basis reduction) and Algorithm 1. Therefore, the runtime of both steps must be considered, and they have to be balanced in order to estimate the total runtime. In particular, the amount of precomputation must be chosen such that the precomputed basis offers the best trade-off between its quality with respect to the hybrid attack (i.e., amplifying the success probability and decreasing the number of operations) and the cost to compute this basis. We show how to optimize the total runtime in Section 4.3.

In the following, we show how Heuristic 1 can be derived. For the rest of this section, let all notations be as in Heuristic 1. We further assume in the following that the assumption of Heuristic 1, i.e., $S \neq \emptyset$, is satisfied. We first provide the following useful definition, already given in [20, 10]. We use the notation of [10].

Definition 4.2.

Let $n \in \mathbb{N}$. A vector $\mathbf{x} \in \mathbb{R}^n$ is called $\mathbf{y}$-admissible (with respect to the basis $\mathbf{B}$) for some vector $\mathbf{y} \in \mathbb{R}^n$ if $\mathrm{NP}_{\mathbf{B}}(\mathbf{x}) = \mathrm{NP}_{\mathbf{B}}(\mathbf{x} - \mathbf{y}) + \mathbf{y}$.

This means that if $\mathbf{x}$ is $\mathbf{y}$-admissible, then $\mathrm{NP}_{\mathbf{B}}(\mathbf{x})$ and $\mathrm{NP}_{\mathbf{B}}(\mathbf{x} - \mathbf{y})$ yield the same lattice vector. We recall the following lemma from [10] about Definition 4.2. It showcases the relevance of the definition by relating it to the equation $\mathrm{NP}_{\mathbf{B}}(\mathbf{t}_1) + \mathrm{NP}_{\mathbf{B}}(\mathbf{t}_2) = \mathrm{NP}_{\mathbf{B}}(\mathbf{t}_1 + \mathbf{t}_2)$, which needs to hold for our attack to work.

Lemma 4.3 ([10, Lemma 2]).

Let $\mathbf{t}_1, \mathbf{t}_2 \in \mathbb{R}^n$ be two arbitrary target vectors. Then the following are equivalent.

  1. $\mathrm{NP}_{\mathbf{B}}(\mathbf{t}_1) + \mathrm{NP}_{\mathbf{B}}(\mathbf{t}_2) = \mathrm{NP}_{\mathbf{B}}(\mathbf{t}_1 + \mathbf{t}_2)$.

  2. $\mathbf{t}_1$ is $\mathrm{NP}_{\mathbf{B}}(\mathbf{t}_1 + \mathbf{t}_2)$-admissible.

  3. $\mathbf{t}_2$ is $\mathrm{NP}_{\mathbf{B}}(\mathbf{t}_1 + \mathbf{t}_2)$-admissible.

Success of the attack and number of loops

We now estimate the expected number of loops in case Algorithm 1 terminates. In the following, we use the subscript $\mathbf{B}'$ for probabilities to indicate that the probability is taken over the randomness of the basis (with Gram–Schmidt lengths $R_1, \ldots, R_{m-r}$). In each loop of the algorithm, we sample a vector $\mathbf{v}_g'$ in the set

$$W = \{\mathbf{w} \in \mathbb{Z}^r \mid \text{exactly } c_i \text{ entries of } \mathbf{w} \text{ are equal to } i \text{ for all } i \in \{-k, \ldots, k\}\}.$$

The attack succeeds if two vectors $\mathbf{v}_g' \in W$ and $\mathbf{v}_g'' \in W$ with $\mathbf{v}_g' + \mathbf{v}_g'' = \mathbf{v}_g$ and

$$\mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g') + \mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g'') = \mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g' + \mathbf{C}\mathbf{v}_g'') = \mathbf{v}_l$$

for some vector $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t \in S$ are sampled in different loops of the algorithm. By Lemma 4.3, the second condition is equivalent to the fact that $\mathbf{C}\mathbf{v}_g'$ is $\mathbf{v}_l$-admissible. We assume that the algorithm only succeeds in this case. We are therefore interested in the following subset of $W$:

$$V = \{\mathbf{w} \in W \mid \mathbf{v}_g - \mathbf{w} \in W \text{ and } \mathbf{C}\mathbf{w} \text{ is } \mathbf{v}_l\text{-admissible for some } \mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t \in S\}.$$

For all $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t \in S$ with $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$, let $p(\mathbf{v})$ denote the probability

$$p(\mathbf{v}) = \Pr_{\mathbf{B}',\, \mathbf{w} \leftarrow W}[\mathbf{C}\mathbf{w} \text{ is } \mathbf{v}_l\text{-admissible}],$$

and let $p_1(\mathbf{v})$ denote the probability

$$p_1(\mathbf{v}) = \Pr_{\mathbf{w} \leftarrow W}[\mathbf{v}_g - \mathbf{w} \in W].$$

By construction, we have that $p_1(\mathbf{v})$ is constant for all $\mathbf{v} \in S$, so we can simply write $p_1$ instead of $p_1(\mathbf{v})$. We make the following reasonable assumption on $p(\mathbf{v})$ and $p_1$.

Assumption 1.

For all $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t \in S$ with $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$, we assume that the independence condition

$$p(\mathbf{v}) = \Pr_{\mathbf{B}',\, \mathbf{w} \leftarrow W}[\mathbf{C}\mathbf{w} \text{ is } \mathbf{v}_l\text{-admissible} \mid \mathbf{v}_g - \mathbf{w} \in W]$$

holds. We further assume that $p(\mathbf{v})$ is equal to some constant probability $p$ (as in Heuristic 1) for all $\mathbf{v} \in S$.

Assuming independence of p and p1 and disjoint events for the elements of S, we can make the following reasonable assumption (analogously to [20, Lemma 6 and Theorem 3]).

Assumption 2.

We assume that

$$\frac{|V|}{|W|} = \Pr_{\mathbf{w} \leftarrow W}[\mathbf{w} \in V] = p_1 \, p \, |S|.$$

The probability $p_1$ is calculated as

$$p_1 = \frac{\prod_{i \in \{\pm 1, \ldots, \pm k\}} \binom{2c_i}{c_i}}{|W|}, \quad \text{where} \quad |W| = \binom{r}{c_{-k}, \ldots, c_k}.$$

From Assumption 2, it follows that $|V| = p_1 \, p \, |W| \, |S|$. As long as the product $p_1 \, p$ is not too small, we can therefore assume that $V \neq \emptyset$.

Assumption 3.

We assume that $V \neq \emptyset$.

Assumption 3 implies that the attack is successful, since by Lemma 4.3, if $\mathbf{v}_g' \in V$, then also $\mathbf{v}_g'' = \mathbf{v}_g - \mathbf{v}_g' \in V$ for the corresponding $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t \in S$. Such two vectors $\mathbf{v}_g'$ and $\mathbf{v}_g''$ in $V$ will eventually be guessed in two separate loops of the algorithm, and they are recognized as a collision, since by the assumption $\|\mathbf{v}_l\|_\infty \leq y$ of Heuristic 1, they share at least one common address. By Assumption 2, we expect to sample an element of $V$ on average every $\frac{1}{p_1 p |S|}$ loops, and by the birthday paradox, we expect to find a collision $\mathbf{v}_g' \in V$ and $\mathbf{v}_g'' \in V$ with $\mathbf{v}_g' + \mathbf{v}_g'' = \mathbf{v}_g$ after $L \approx \frac{1}{p_1 p |S|}\sqrt{|V|}$ loops. In conclusion, we can estimate the expected number of loops by

$$L \approx \frac{\sqrt{|V|}}{p_1 \, p \, |S|} = \sqrt{\frac{|W|}{p_1 \, p \, |S|}} = \binom{r}{c_{-k}, \ldots, c_k} \Bigg( p \, |S| \prod_{i \in \{\pm 1, \ldots, \pm k\}} \binom{2c_i}{c_i} \Bigg)^{-\frac{1}{2}}.$$
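For illustration, the following Python sketch (our own helper with made-up toy parameters) evaluates this estimate for $L$ from the attack parameters $c_i$, the probability $p$ and $|S|$, using the equivalent form $L \approx \sqrt{|W|/(p_1 p |S|)}$.

```python
from math import comb, factorial, prod

def multinomial(n, parts):
    """Multinomial coefficient n! / (parts[0]! * ... * parts[-1]!)."""
    assert sum(parts) == n
    return factorial(n) // prod(factorial(p) for p in parts)

def expected_loops(r, c, p, S):
    """Estimate L = sqrt(|W| / (p_1 * p * |S|)) with |W| = multinomial(r; c_i)
    and p_1 = prod_i binom(2c_i, c_i) / |W|, where c maps i -> c_i (including c_0)."""
    W = multinomial(r, list(c.values()))
    p1 = prod(comb(2 * c[i], c[i]) for i in c if i != 0) / W
    return (W / (p1 * p * S)) ** 0.5

# Toy example: r = 20, c_{-1} = c_1 = 3 (hence c_0 = 14), p = 2^-5 and |S| = 1.
c = {-1: 3, 0: 14, 1: 3}
print(expected_loops(20, c, 2 ** -5, 1))
```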

It remains to calculate the probability $p$. This can be done analogously to [10, Heuristic 3] and the calculations following it. For a detailed and convincing justification of the heuristic and the intuition behind it, including a geometric intuition behind the $\mathbf{y}$-admissibility and its mathematical modeling, we refer to [10]. Following the calculations of [10], we obtain the following assumption.

Assumption 4.

We assume that the probability p is approximately

$$p \approx \prod_{i=1}^{m-r} \Bigg( 1 - \frac{1}{r_i \, B\big(\frac{(m-r)-1}{2}, \frac{1}{2}\big)} \int_{-r_i-1}^{-r_i} \int_{\max(-1,\, z-r_i)}^{z+r_i} \big(1 - t^2\big)^{\frac{(m-r)-3}{2}} \, dt \, dz \Bigg),$$

where $B(\cdot,\cdot)$ and $r_1, \ldots, r_{m-r}$ are defined as in Heuristic 1.

The integrals in the above formula for $p$ can, for instance, be calculated using SageMath [31]. In order to calculate $p$, one needs to estimate the lengths $r_i$, as discussed in Remark 4.1. In Appendix A.3, we provide the results of some preliminary experiments supporting the validity of Assumption 4.
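As an illustration of such a computation, the following Python sketch (our own; it uses SciPy rather than SageMath, and the Gram–Schmidt profile and norm bound $Y$ in the example are made up) evaluates the double integrals of Assumption 4 numerically.

```python
from scipy.integrate import dblquad
from scipy.special import beta

def collision_probability(R, Y):
    """Evaluate the probability p of Assumption 4 numerically from the
    Gram-Schmidt lengths R = (R_1, ..., R_{m-r}) and the norm bound Y,
    with r_i = R_i / (2Y)."""
    d = len(R)                                   # d = m - r
    B = beta((d - 1) / 2.0, 0.5)                 # Euler beta function B((d-1)/2, 1/2)
    p = 1.0
    for Ri in R:
        ri = Ri / (2.0 * Y)
        integral, _ = dblquad(
            # integrand (1 - t^2)^((d-3)/2); inner variable t, outer variable z
            lambda t, z: max(0.0, 1.0 - t * t) ** ((d - 3) / 2.0),
            -ri - 1.0, -ri,                      # z ranges over [-r_i - 1, -r_i]
            lambda z: max(-1.0, z - ri),         # lower limit for t
            lambda z: z + ri,                    # upper limit for t
        )
        p *= 1.0 - integral / (ri * B)
    return p

# Hypothetical example: a 100-dimensional sublattice with a GSA-like profile.
R = [25.0 * 1.006 ** (-2 * i) for i in range(100)]
print(collision_probability(R, 5.0))
```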

Number of operations

We now estimate the expected total number of operations of the hybrid attack under the conditions of Heuristic 1. In order to do so, we need to estimate the runtime of one inner loop and multiply it by the expected number of loops. As in [20] and [10], we make the following assumption, which is plausible as long as the sets of addresses are not extremely large.

Assumption 5.

We assume that the number of operations of one inner loop of Algorithm 1 is dominated by the number of operations of one nearest plane call.

We remark that we see Assumption 5 as one of the more critical ones. Obviously, it does not hold for all parameter choices[2], but it is reasonable to believe that it holds for many relevant parameter sets, as claimed in [20] and [10]. However, the claim in [20] is based on the observation that for random vectors in $\mathbb{Z}_q^m$, it is highly unlikely that adding a binary vector will flip the sign of many coordinates (i.e., that a random vector in $\mathbb{Z}_q^m$ has many minus one coordinates). While this is true, the vectors in question are in fact not random vectors in $\mathbb{Z}_q^m$ but outputs of a nearest plane call, and thus potentially shorter than typical vectors in $\mathbb{Z}_q^m$. Therefore, it can be expected that adding a binary vector will flip more signs. Additionally, in general, it is not only a binary vector that is added, but a vector of infinity norm $y$, which makes flipping signs even more likely. However, we believe that Assumption 5 is still plausible for most relevant parameter sets and small $y$, and even in the worst case the assumption leads to more conservative security estimates.

In [17], Hirschhorn et al. give an experimentally verified number of bit operations (defined as in [23]) of one nearest plane call and state a conservative assumption on the runtime of the nearest plane algorithm using precomputation. Based on their results, we use the following assumption for our security estimates. We provide two different kinds of security estimates, one which we call “standard” (std) and one which we call “conservative” (cons). The latter accounts for possible cryptanalytic improvements which are plausible but not yet known to be applicable.

Assumption 6.

Let $d \in \mathbb{N}$ be the lattice dimension. For our standard security estimates, we assume that the number of bit operations of one nearest plane call is approximately $d^2/2^{1.06}$. For our conservative security estimates, we assume that the number of bit operations of one nearest plane call is approximately $d/2^{1.06}$.

4.2 Determining the success probability

In Heuristic 1, it is guaranteed that Algorithm 1 is successful if the lattice $\Lambda$ contains a non-empty set $S$ of short vectors of the form $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t$, where $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$ with $\|\mathbf{v}_l\| \leq Y$, $\|\mathbf{v}_l\|_\infty \leq y$, $\|\mathbf{v}_g\|_\infty \leq k$, exactly $2c_i$ entries of $\mathbf{v}_g$ equal to $i$ for all $i \in \{\pm 1, \ldots, \pm k\}$, and $\mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$. In order to determine a lower bound on the success probability, one must calculate the probability that the set $S$ of such vectors is non-empty, since

$$p_{\mathrm{succ}} \geq \Pr[S \neq \emptyset].$$

However, this probability depends heavily on the distribution of the short vectors contained in $\Lambda$, and its calculation is therefore not part of Heuristic 1, allowing for more flexibility. In consequence, this analysis must be performed for the specific distribution at hand, originating from the cryptographic scheme that is to be analyzed. The most involved part in calculating the success probability is typically calculating the probability $p_{\mathrm{NP}}$ that $\mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$. As shown in [10], the probability $p_{\mathrm{NP}}$ is approximately

$$p_{\mathrm{NP}} \approx \prod_{i=1}^{m-r} \Bigg( 1 - \frac{2}{B\big(\frac{(m-r)-1}{2}, \frac{1}{2}\big)} \int_{-1}^{\max(-r_i,\, -1)} \big(1 - t^2\big)^{\frac{(m-r)-3}{2}} \, dt \Bigg), \tag{4.1}$$

where the $r_i$ are defined as in Heuristic 1.

In [24], Lindner and Peikert calculated the success probability of the nearest plane(s) algorithm for the case that the difference vector is drawn from a discrete Gaussian distribution with standard deviation $\sigma$. In our case, this would result in the formula

$$p_{\mathrm{NP}} = \Pr[\mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l] = \prod_{i=1}^{m-r} \operatorname{erf}\!\left(\frac{R_i}{2\sqrt{2}\,\sigma}\right). \tag{4.2}$$

In the following, we compare formulas (4.1) and (4.2) in the case of discrete Gaussian distributions with standard deviation $\sigma$. To this end, we evaluated both formulas for a lattice of dimension $d = m - r = 200$ and determinant $128^{100}$ for different standard deviations. For formula (4.1), we assumed that the norm of $\mathbf{v}_l$ is $\sigma\sqrt{200}$, as expected, and that the basis follows the GSA with Hermite delta $1.008$. The results, presented in Table 2, show that both formulas give virtually the same results for the analyzed instances. This indicates that formula (4.1) is a good generalization of the one provided in [24].
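The comparison can be reproduced along the following lines (our own Python/SciPy sketch of formulas (4.1) and (4.2) in the setting of Table 2; small numerical deviations from the table values are to be expected).

```python
import math
from scipy.integrate import quad
from scipy.special import beta

def p_np_geometric(R, Y):
    """Formula (4.1): nearest plane success probability for a difference vector
    of Euclidean norm roughly Y, with r_i = R_i / (2Y)."""
    d = len(R)
    B = beta((d - 1) / 2.0, 0.5)
    p = 1.0
    for Ri in R:
        ri = Ri / (2.0 * Y)
        integral, _ = quad(lambda t: max(0.0, 1.0 - t * t) ** ((d - 3) / 2.0),
                           -1.0, max(-ri, -1.0))
        p *= 1.0 - 2.0 * integral / B
    return p

def p_np_gaussian(R, sigma):
    """Formula (4.2) (Lindner-Peikert): discrete Gaussian error with std deviation sigma."""
    return math.prod(math.erf(Ri / (2.0 * math.sqrt(2.0) * sigma)) for Ri in R)

# Setting of Table 2: dimension d = 200, determinant 128^100, Hermite delta 1.008.
d, delta = 200, 1.008
root_det = math.exp(100 * math.log(128) / d)                  # det^(1/d)
R = [delta ** (-2 * (i - 1) + d) * root_det for i in range(1, d + 1)]

for s in (1, 2, 4, 8, 16):
    sigma = s / math.sqrt(2 * math.pi)
    Y = sigma * math.sqrt(d)                                   # expected norm of v_l
    print(s, math.log2(p_np_geometric(R, Y)), math.log2(p_np_gaussian(R, sigma)))
```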

Table 2

Comparison of (4.1) and (4.2) for standard deviation $\sigma = s/\sqrt{2\pi}$ and varying Gaussian parameter $s$.

Gaussian parameter                    | $s=1$        | $s=2$        | $s=4$         | $s=8$         | $s=16$
$p_{\mathrm{NP}}$ according to (4.1)  | $2^{-0.033}$ | $2^{-3.658}$ | $2^{-27.775}$ | $2^{-87.506}$ | $2^{-188.445}$
$p_{\mathrm{NP}}$ according to (4.2)  | $2^{-0.036}$ | $2^{-3.669}$ | $2^{-27.680}$ | $2^{-87.217}$ | $2^{-187.932}$

4.3 Optimizing the runtime

The final step in our analysis is to determine the runtime of the complete hybrid attack (Algorithm 2) including precomputation, which involves the runtime $T_{\mathrm{red}}$ of the basis reduction, the runtime $T_{\mathrm{hyb}}$ of the actual attack and the success probability $p_{\mathrm{succ}}$. All these quantities depend on the attack parameter $r$ and the quality of the basis $\mathbf{B}'$ given by the lengths of the Gram–Schmidt vectors achieved by the basis reduction performed in the precomputation step of the attack. The quality of the basis can be measured by its Hermite delta $\delta$ (see Section 2). In order to unfold the full potential of the attack, one must minimize the runtime over all possible attack parameters $r$ and $\delta$. For our standard security estimates, we assume that the total runtime (which is to be minimized) is given by

$$T_{\mathrm{total,std}}(\delta, r) = \frac{T_{\mathrm{red,std}}(\delta, r) + T_{\mathrm{hyb,std}}(\delta, r)}{p_{\mathrm{succ}}(\delta, r)}.$$

For our conservative security estimates, we assume that, given a reduced basis of quality $\delta$, it is significantly easier to find another reduced basis of the same quality $\delta$ than it is to find one given an arbitrary non-reduced basis. We therefore assume that even if the attack is not successful and needs to be repeated, the large precomputation cost for the basis reduction only needs to be paid once, and hence

$$T_{\mathrm{total,cons}}(\delta, r) = T_{\mathrm{red,cons}}(\delta, r) + \frac{T_{\mathrm{hyb,cons}}(\delta, r)}{p_{\mathrm{succ}}(\delta, r)}.$$

In order to calculate $T_{\mathrm{total,cons}}(\delta, r)$ and $T_{\mathrm{total,std}}(\delta, r)$, one must calculate $T_{\mathrm{hyb,cons}}(\delta, r)$, $T_{\mathrm{hyb,std}}(\delta, r)$, $T_{\mathrm{red,cons}}(\delta, r)$, $T_{\mathrm{red,std}}(\delta, r)$ and $p_{\mathrm{succ}}(\delta, r)$. How to calculate $T_{\mathrm{hyb,cons}}(\delta, r)$ and $T_{\mathrm{hyb,std}}(\delta, r)$ is shown in Heuristic 1. The success probability $p_{\mathrm{succ}}(\delta, r)$ is calculated in Section 4.2.

Basis reduction.

Estimating the necessary runtime Tred(δ,r) for a basis reduction of quality δ is highly non-trivial and still an active research area, and precise estimates are hard to derive. For this reason, our framework is designed such that the cost model for basis reduction can be replaced by a different one while the rest of the analysis remains intact. Thus, if future research shows significant improvements in estimating the cost of basis reduction, these cost models can be applied in our framework. To illustrate our method, we fix two common approaches to estimate the cost of basis reduction. For our standard security estimates, we apply the following approach. We first determine the (minimal) block size β necessary to achieve the targeted Hermite delta δ via

$$\delta \approx \left( \frac{\beta\,(\pi\beta)^{\frac{1}{\beta}}}{2\pi e} \right)^{\frac{1}{2(\beta-1)}}$$

according to Chen’s thesis [12] (see also, e.g., [27]). We then use the BKZ 2.0 simulator[3] of the full version of [13] to determine the corresponding necessary number of rounds $k$. Finally, we use the estimate

$$\mathrm{Estimate}_{\mathrm{std}}(\beta, n, k) = 0.187281\,\beta\log(\beta) - 1.0192\,\beta + \log(nk) + 16.1$$

provided in [2] to determine the (base-two) logarithm of the runtime, where $n$ is the lattice dimension.

For the conservative security estimates, we assume that only one round of BKZ 2.0 with the determined block size $\beta$ is needed. The reason for this assumption is that one can use progressive BKZ strategies to reduce the number of rounds needed with block size $\beta$ by running BKZ with block sizes smaller than $\beta$ in advance, see [12, 4]. Since BKZ with smaller block sizes is considerably cheaper, we do not include the BKZ cost for smaller block sizes in our conservative security estimates. Assuming that the number of rounds can be decreased to one in this way gives

$$\mathrm{Estimate}_{\mathrm{cons}}(\beta, n) = 0.187281\,\beta\log(\beta) - 1.0192\,\beta + \log(n+1-\beta) + 16.1.$$
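The following Python sketch (our own helper functions) illustrates how the block size $\beta$ can be obtained from a target Hermite delta and plugged into the two cost estimates; the simple linear search for $\beta$ is our own choice, and the number of rounds $k$ in the example is made up (in the actual analysis it comes from the BKZ 2.0 simulator).

```python
import math

def hermite_delta(beta):
    """Hermite delta achievable with BKZ block size beta (estimate from Chen's thesis)."""
    return (beta * (math.pi * beta) ** (1.0 / beta) / (2 * math.pi * math.e)) ** (1.0 / (2 * (beta - 1)))

def block_size(delta, start=60):
    """Smallest block size beta (>= start) achieving Hermite delta at most delta."""
    b = start
    while hermite_delta(b) > delta:
        b += 1
    return b

def log_cost_std(beta, n, k):
    """log2 of the standard estimate: k rounds of BKZ 2.0 in dimension n."""
    return 0.187281 * beta * math.log2(beta) - 1.0192 * beta + math.log2(n * k) + 16.1

def log_cost_cons(beta, n):
    """log2 of the conservative estimate: a single BKZ 2.0 round."""
    return 0.187281 * beta * math.log2(beta) - 1.0192 * beta + math.log2(n + 1 - beta) + 16.1

# Hypothetical example: target delta = 1.005 in dimension n = 800, k = 8 rounds.
b = block_size(1.005)
print(b, log_cost_std(b, 800, 8), log_cost_cons(b, 800))
```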

Runtime optimization.

The optimization of the total runtime $T_{\mathrm{total}}(\delta, r)$ is performed in the following way. For each possible $r$, we find the optimal $\delta_r$ that minimizes the runtime $T_{\mathrm{total}}(\delta, r)$. Consequently, the optimal runtime is given by $\min_r\{T_{\mathrm{total}}(\delta_r, r)\}$, the smallest of those minimized runtimes. Note that for fixed $r$, the optimal $\delta_r$ for our conservative security estimates can easily be found in the following way. For fixed $r$, the function $T_{\mathrm{red,cons}}(\delta, r)$ is monotonically decreasing in $\delta$, and the function $T_{\mathrm{hyb,cons}}(\delta, r)/p_{\mathrm{succ}}(\delta, r)$ is monotonically increasing in $\delta$. Therefore, $T_{\mathrm{total,cons}}(\delta, r)$ is (close to) optimal when both functions are balanced, i.e., take the same value. Thus the optimal $\delta_r$ can, for example, be found by a simple binary search.

For our standard security estimates, we assume that the function $T_{\mathrm{red,std}}(\delta, r)/p_{\mathrm{succ}}(\delta, r)$ is monotonically decreasing in $\delta$ in the relevant range; hence the optimal $T_{\mathrm{total,std}}(\delta, r)$ can be found by balancing the functions $T_{\mathrm{red,std}}(\delta, r)/p_{\mathrm{succ}}(\delta, r)$ and $T_{\mathrm{hyb,std}}(\delta, r)/p_{\mathrm{succ}}(\delta, r)$ as above. Note that this assumption might not be true, but it surely leads to upper bounds on the optimal runtime of the attack.
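A minimal sketch of the balancing step, assuming access to the two log-cost functions as black boxes (both callables and the toy example below are hypothetical), could look as follows.

```python
def balance_delta(log_T_red, log_T_attack, lo=1.003, hi=1.015, iters=60):
    """Binary search for the delta at which the decreasing log-cost log_T_red(delta)
    and the increasing log-cost log_T_attack(delta) (e.g. T_hyb/p_succ) coincide."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if log_T_red(mid) > log_T_attack(mid):
            lo = mid          # reduction still dominates: relax the reduction quality
        else:
            hi = mid
    return (lo + hi) / 2.0

# Toy illustration with made-up monotone log2-cost functions:
print(balance_delta(lambda d: 1.2 / (d - 1.0),        # decreasing in delta
                    lambda d: 30000.0 * (d - 1.0)))   # increasing in delta
```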

4.4 Typical flaws in previous analyses of the hybrid attack

We end this section by listing some typical over-simplifications which can be found in previous analyses of the hybrid attack. We remark that some simplifying assumptions lead to overestimating the security of the schemes and others to underestimating it. In some analyses, both types occurred at the same time and somewhat magically almost canceled out each other's effect on the security estimates for some parameter sets.

Ignoring the probability p

One of the most frequently encountered simplifications that appeared in several works is the lack of a (correct) calculation of the probability $p$ defined in Assumption 1. As can be seen in Heuristic 1, this probability plays a crucial role in the runtime analysis of the attack. Nevertheless, in several works [21, 14, 18, 30, 8], the authors ignore the presence of this probability by setting $p = 1$ for the sake of simplicity. However, even though we took the probability into account when optimizing the attack parameters[4], for the parameter sets we analyze in Section 5, the probability $p$ was sometimes as low as $2^{-80}$, see Table 4. Note that the wrong assumption $p = 1$ gives more power to the attacker, since it assumes that collisions can always be detected by the attacker, although this is not the case, resulting in security underestimates. We also remark that in some works, the probability $p$ is not completely ignored but determined in a purely experimental way [20] or calculated using additional assumptions [17].

Unrealistic demands on the basis reduction

In most works [20, 21, 17, 14, 18, 30, 8], the authors demand a basis reduction that is sufficiently good such that the nearest plane algorithm must unveil the searched short vector (or at least does so with very high probability). To be more precise, [20, Lemma 1] is used to determine what sufficiently good exactly means. In our opinion, this demand is unrealistic, and instead we account for the probability of this event in the success probability, which reflects the attacker's power in a more accurate way. In addition, we note that in most cases, [20, Lemma 1] is not applicable the way it is claimed in several works. We briefly sketch why this is the case. Often, [20, Lemma 1] is applied to determine the necessary quality of a reduced basis such that the nearest plane algorithm (on correct input) unveils a vector $\mathbf{v}$ of infinity norm at most $y$. However, this lemma is only applicable if the basis matrix is in triangular form, which is not the case in general. Therefore, one needs to transform the basis with an orthonormal matrix $\mathbf{Y}$ in order to obtain a triangular basis. This basis, however, does not span the same lattice but an isomorphic one, which contains the transformed vector $\mathbf{v}\mathbf{Y}$, but (in general) not the vector $\mathbf{v}$. While the transformation $\mathbf{Y}$ preserves the Euclidean norm of the vector $\mathbf{v}$, it does not preserve its infinity norm. Therefore, the lemma cannot be applied with the same infinity norm bound $y$, which is done in most works. In fact, in the worst case, the new infinity norm bound can be up to $\sqrt{m}\,y$, where $m$ is the lattice dimension. In consequence, one would have to apply [20, Lemma 1] with infinity norm bound $\sqrt{m}\,y$ instead of $y$ in order to get a rigorous statement, which demands a much better basis reduction. This problem is already mentioned, but not solved, in [30]. Note that the worst case, where (i) the vector $\mathbf{v}$ has Euclidean norm $\sqrt{m}\,y$, and (ii) all the weight of the transformed vector is concentrated on one coordinate such that $\sqrt{m}\,y$ is a tight bound on the infinity norm after transformation, is highly unlikely. In the following, we give an example to illustrate the different success conditions for the nearest plane algorithm.

Example.

Let $d = 512$ and $q = 1024$. We consider the nearest plane algorithm on a BDD instance $\mathbf{t} \in \Lambda + \mathbf{e}$ in a $d$-dimensional lattice $\Lambda$ of determinant $q^{d/2}$, where $\mathbf{e}$ is a random binary vector. Naively applying [20, Lemma 1] with infinity norm bound $1$ would suggest that a basis reduction of quality $\delta_1 \approx 1.0068$ is sufficient to recover $\mathbf{e}$. Applying the cost model used for our conservative security estimates described in Section 4.3, this would take roughly $T_1 \approx 2^{91}$ operations. However, as described above, the lemma cannot be applied with that naive bound. Instead, using the worst-case bound $\sqrt{m}\,y$ on the infinity norm and applying [20, Lemma 1] would lead to a basis reduction of quality $\delta_2 \approx 1.0007$, taking roughly $T_2 \approx 2^{357}$ operations to guarantee the success of the nearest plane algorithm. This shows the impracticality of this approach. Instead, taking the success probability of the nearest plane algorithm into account, as done in this work, one can achieve the following results. Assuming that the Euclidean norm of a random binary vector is roughly $\|\mathbf{e}\| \approx \sqrt{m/2}$, one can balance the quality of the basis reduction and the success probability of the nearest plane algorithm to obtain the optimal trade-off $\delta_3 \approx 1.0067$, taking roughly $T_3 \approx 2^{94}$ operations, with a success probability of roughly $2^{-31}$.

Missing or incorrect optimization

In some works such as [20, 14], the optimization of the attack parameters is either completely missing, ignoring the fact that there is a trade-off between the time spent on basis reduction and the actual attack, or incorrect. As a result, one only obtains upper bounds on the estimated security level but not precise estimates.

Other inaccuracies

Further inaccuracies we encountered include the following:

  1. Implicitly assuming that the meet-in-the-middle part $\mathbf{v}_g$ of the short vector has the right number of $i$-entries for each $i$ [21, 14, 18, 30, 8]. This is not the case in general and therefore needs to be accounted for in the success probability.

  2. Simplifying the structure of the secret key when convenient in order to ease the analysis [18, 30]. This can drastically change the norm of the secret vector and in consequence manipulate the runtime estimates.

  3. Assuming that an attacker might be able to utilize some algebraic structure, without any evidence that this is the case [17, 18, 30]. This results in security underestimates if the assumption is in fact wrong.

5 Updating security estimates against the hybrid attack

In this section, we apply our improved analysis of the hybrid attack to various cryptographic schemes in order to reevaluate their security and derive updated security estimates. The section is structured as follows: Each scheme is analyzed in a separate subsection. We begin with subsections on the encryption schemes NTRU, NTRU Prime and R-BinLWEEnc, and end with subsections on the signature schemes BLISS and GLP. In each subsection, we first give a brief introduction to the scheme. We then apply the hybrid attack to the scheme and analyze its complexity according to Section 4. This analysis is performed with the following four steps:

  1. Constructing the lattice. We first construct a lattice of the required form which contains the secret key as a short vector.

  2. Determining the attack parameters. We find suitable attack parameters $c_i$ (depending on the meet-in-the-middle dimension $r$), infinity norm bounds $y$ and $k$, and estimate the expected Euclidean norm $Y$.

  3. Determining the success probability. We determine the success probability of the attack according to Section 4.2.

  4. Optimizing the runtime. We optimize the runtime of the attack for our standard and conservative security estimates according to Section 4.3.

We end each subsection by providing a table of updated security estimates against the hybrid attack obtained by our analysis. In the tables, we also provide the optimal attack parameters $(\delta_r, r)$ derived by our optimization process and the corresponding probability $p$ with which collisions can be detected. For comparison, we further provide the security estimates of the previous works. In our runtime optimization of the attack, we optimized with a precision of up to one bit. As a result, there may not be a unique optimal attack parameter pair $(\delta_r, r)$, and for the tables, we simply pick one that minimizes the runtime (up to one bit precision).

5.1 NTRU

The NTRU encryption system was officially introduced in [19] and is one of the most important lattice-based encryption schemes today due to its high efficiency. The hybrid attack was first developed to attack NTRU [20] and has been applied to various proposed parameter sets since [20, 21, 17, 18, 30]. In this work, we restrict our studies to the NTRU EESS # 1 parameter sets given in [18, Table 3].

Constructing the lattice

The NTRU cryptosystem is defined over the ring $R_q = \mathbb{Z}_q[X]/(X^N - 1)$, where $N, q \in \mathbb{N}$ and $N$ is prime. The parameters $N$ and $q$ are public. Furthermore, there exist public parameters $d_1, d_2, d_3, d_g \in \mathbb{N}$. For the parameter sets considered in [18], the private key is a pair of polynomials $(f, g) \in R_q^2$, where $g$ is a trinary polynomial with exactly $d_g + 1$ ones and $d_g$ minus ones, and $f = 1 + 3F$ is invertible in $R_q$ with $F = A_1 A_2 + A_3$ for some trinary polynomials $A_i$ with exactly $d_i$ one and $d_i$ minus one entries. The corresponding public key is $(1, h)$, where $h = f^{-1}g$. In the following, we assume that $h$ and $3$ are invertible in $R_q$. We further identify polynomials with their coefficient vectors. We can recover the private key by finding the secret vector $\mathbf{v} = (\mathbf{F}, \mathbf{g})^t$.[5] Since $h = (1 + 3F)^{-1}g$, we have $3^{-1}h^{-1}g = F + 3^{-1}$, and therefore it holds that

$$\mathbf{v} + \begin{pmatrix} \mathbf{3}^{-1} \\ \mathbf{0} \end{pmatrix} = \begin{pmatrix} 3^{-1}\bar{\mathbf{H}}\,\mathbf{g} + q\mathbf{w} \\ \mathbf{g} \end{pmatrix} = \begin{pmatrix} q\mathbf{I}_n & 3^{-1}\bar{\mathbf{H}} \\ \mathbf{0} & \mathbf{I}_n \end{pmatrix} \begin{pmatrix} \mathbf{w} \\ \mathbf{g} \end{pmatrix}$$

for some $\mathbf{w} \in \mathbb{Z}^n$, where $\bar{\mathbf{H}}$ is the rotation matrix of $h^{-1}$. Hence, $\mathbf{v}$ can be recovered by solving BDD on input $(-\mathbf{3}^{-1}, \mathbf{0})^t$ in the $q$-ary lattice

$$\Lambda = \Lambda\!\left(\begin{pmatrix} q\mathbf{I}_n & 3^{-1}\bar{\mathbf{H}} \\ \mathbf{0} & \mathbf{I}_n \end{pmatrix}\right),$$

since $(-\mathbf{3}^{-1}, \mathbf{0})^t - \mathbf{v} \in \Lambda$.[6] A similar way to recover the private key was already mentioned in [30]. The lattice $\Lambda$ has dimension $2n$ and determinant $q^n$. Since we take the BDD approach for the hybrid attack, we assume that only $\mathbf{v}$, and not its rotations or its additive inverse, can be found by the attack, see Section 3. Hence, we assume that the set $S$, as defined in Heuristic 1, consists of at most one element.

Determining the attack parameters

Let $\mathbf{v} = (\mathbf{F}, \mathbf{g})^t = (\mathbf{v}_l, \mathbf{v}_g)^t$ with $\mathbf{v}_l \in \mathbb{Z}^{2n-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$. Since $\mathbf{g}$ is a trinary vector, we can set the infinity norm bound $k$ on $\mathbf{v}_g$ equal to one. In contrast, determining an infinity norm bound on the vector $\mathbf{v}_l$ is not that trivial, since $\mathbf{F}$ is not trinary but of product form. For a specific parameter set, this can either be done theoretically or experimentally. The same holds for estimating the Euclidean norm of $\mathbf{v}_l$. For our runtime estimates, we determined the expected Euclidean norm of $\mathbf{F}$ experimentally and set the expected Euclidean norm of $\mathbf{v}_l$ to

$$\|\mathbf{v}_l\| \approx \sqrt{\|\mathbf{F}\|^2 + \frac{n-r}{n}(2d_g + 1)}.$$

We set $2c_{-1} = \frac{r}{n}d_g$ and $2c_1 = \frac{r}{n}(d_g + 1)$ equal to the expected number of minus one entries and one entries, respectively, in $\mathbf{g}$.[7] For simplicity, we assume in the following that $c_{-1}$ and $c_1$ are integers in order to avoid writing down rounding operators.

Determining the success probability

The next step is to determine the success probability $p_{\mathrm{succ}}$, i.e., the probability that $\mathbf{v}_g$ has exactly $2c_{-1}$ entries equal to minus one and $2c_1$ entries equal to one, and that $\mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$ holds, where $\mathbf{B}'$ is as given in Heuristic 1. Assuming independence, the success probability is approximately

$$p_{\mathrm{succ}} \approx p_c \cdot p_{\mathrm{NP}},$$

where $p_c$ is the probability that $\mathbf{v}_g$ has exactly $2c_{-1}$ entries equal to minus one and $2c_1$ entries equal to one, and $p_{\mathrm{NP}}$ is defined and calculated as in Section 4.2. Obviously, $p_c$ is given by

$$p_c = \frac{\binom{r}{2\tilde{c}_0,\, 2c_{-1},\, 2c_1} \binom{n-r}{d_0 - 2\tilde{c}_0,\, d_g - 2c_{-1},\, d_g + 1 - 2c_1}}{\binom{n}{d_0,\, d_g,\, d_g + 1}},$$

where $2\tilde{c}_0 = r - 2c_{-1} - 2c_1$ and $d_0 = n - (d_g + 1) - d_g$. As explained earlier, since we use the BDD approach of the hybrid attack, we assume that $|S| = 1$ in case the attack is successful.
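For concreteness, the following Python sketch (our own; the parameter values in the example are hypothetical and not taken from [18]) evaluates $p_c$ via exact integer multinomials.

```python
from math import factorial, prod

def multinomial(n, parts):
    """Multinomial coefficient n! / (parts[0]! * ... * parts[-1]!)."""
    assert sum(parts) == n and all(p >= 0 for p in parts)
    return factorial(n) // prod(factorial(p) for p in parts)

def p_c_ntru(n, r, dg, c_m1, c_p1):
    """Probability that the guessed block of g contains exactly 2*c_m1 minus ones
    and 2*c_p1 ones (hypergeometric counting, as in Section 5.1)."""
    d0 = n - (dg + 1) - dg                  # number of zero entries of g
    c0_twice = r - 2 * c_m1 - 2 * c_p1
    num = multinomial(r, [c0_twice, 2 * c_m1, 2 * c_p1]) * \
          multinomial(n - r, [d0 - c0_twice, dg - 2 * c_m1, dg + 1 - 2 * c_p1])
    return num / multinomial(n, [d0, dg, dg + 1])

# Hypothetical illustration: n = 401, dg = 133, r = 104, with the expected counts
# 2c_{-1} ~ r*dg/n and 2c_1 ~ r*(dg+1)/n rounded to even integers.
n, dg, r = 401, 133, 104
c_m1, c_p1 = round(r * dg / n / 2), round(r * (dg + 1) / n / 2)
print(p_c_ntru(n, r, dg, c_m1, c_p1))
```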

Optimizing the runtime

We determined the optimal attack parameters to estimate the minimal runtime of the hybrid attack for the NTRU EESS # 1 parameter sets given in [18, Table 3]. The results, including the optimal r, corresponding δr and resulting probability p that collisions can be found, are presented in Table 3. Our analysis shows that the security levels against the hybrid attack claimed in [18] are lower than the actual security levels for all parameter sets. In addition, our results show that for all of the analyzed parameter sets, the hybrid attack does not perform better than a purely combinatorial meet-in-the-middle search, see [18, Table 3]. Our results therefore disprove the common claim that the hybrid attack is necessarily the best attack on NTRU.

Table 3

Optimal attack parameters and security levels against the hybrid attack for NTRU.

Parameter set                         | $n=401$ | $n=439$ | $n=593$ | $n=743$
Optimal $r$ cons/std                  | 104/122 | 122/140 | 206/219 | 290/308
Optimal $\delta_r$, cons              | 1.00544 | 1.00509 | 1.00412 | 1.00352
Optimal $\delta_r$, std               | 1.00552 | 1.00518 | 1.00420 | 1.00357
Corresp. $p_{\mathrm{succ}}$ cons/std | $2^{-70}$/$2^{-43}$ | $2^{-56}$/$2^{-47}$ | $2^{-67}$/$2^{-62}$ | $2^{-78}$/$2^{-69}$
Security cons/std in bits             | 145/162 | 165/182 | 249/267 | 335/354
In [18] cons/std                      | 116/127 | 133/145 | 204/236 | 280/330
$r$ cons/std used in [18]             | 154/166 | 175/192 | 264/303 | 360/423

5.2 NTRU Prime

The NTRU Prime encryption scheme was recently introduced [8] in order to eliminate worrisome algebraic structures that exist within NTRU [19] or Ring-LWE based encryption schemes such as [25, 3].

Constructing the lattice

The Streamlined NTRU Prime family of cryptosystems is parameterized by three integers $(n, q, t) \in \mathbb{N}^3$, where $n$ and $q$ are odd primes. The base ring for Streamlined NTRU Prime is $R_q = \mathbb{Z}_q[X]/(X^n - X - 1)$. The private key is (essentially) a pair of polynomials $(g, f) \in R_q^2$, where $g$ is drawn uniformly at random from the set of all trinary polynomials, and $f$ is drawn uniformly at random from the set of all trinary polynomials with exactly $2t$ non-zero coefficients. The corresponding public key is $h = g(3f)^{-1} \in R_q$. In the following, we identify polynomials with their coefficient vectors. As described in [8], the secret vector $\mathbf{v} = (\mathbf{g}, \mathbf{f})^t$ is contained in the $q$-ary lattice

$$\Lambda = \Lambda\!\left(\begin{pmatrix} q\mathbf{I}_n & 3\mathbf{H} \\ \mathbf{0} & \mathbf{I}_n \end{pmatrix}\right),$$

where $\mathbf{H}$ is the rotation matrix of $h$, since

$$\begin{pmatrix} q\mathbf{I}_n & 3\mathbf{H} \\ \mathbf{0} & \mathbf{I}_n \end{pmatrix} \begin{pmatrix} \mathbf{w} \\ \mathbf{f} \end{pmatrix} = \begin{pmatrix} q\mathbf{w} + 3\mathbf{H}\mathbf{f} \\ \mathbf{f} \end{pmatrix} = \begin{pmatrix} \mathbf{g} \\ \mathbf{f} \end{pmatrix} = \mathbf{v}$$

for some $\mathbf{w} \in \mathbb{Z}^n$. The determinant of the lattice $\Lambda$ is given by $q^n$, and its dimension is equal to $2n$. Note that in the case of Streamlined NTRU Prime, the rotations of a trinary polynomial are not necessarily trinary, but it is likely that some are. The authors of [8] conservatively assume that the maximum number of good rotations of $\mathbf{v}$ that can be utilized by the attack is $n - t$, which we also assume in the following. Counting their additive inverses leaves us with $2(n-t)$ short vectors that can be found by the attack.

Determining the attack parameters

Let $\mathbf{v} = (\mathbf{g}, \mathbf{f})^t = (\mathbf{v}_l, \mathbf{v}_g)^t$ with $\mathbf{v}_l \in \mathbb{Z}^{2n-r}$ and $\mathbf{v}_g \in \mathbb{Z}^r$. Since $\mathbf{v}$ is trinary, we can set the infinity norm bounds $y$ and $k$ equal to one. The expected Euclidean norm of $\mathbf{v}_l$ is given by

$$\|\mathbf{v}_l\| \approx \sqrt{\tfrac{2}{3}n + \tfrac{n-r}{n}\,2t}.$$

We set $2c_1 = 2c_{-1} = \frac{r}{n}\,t$ equal to the expected number of one entries (or minus one entries, respectively) in $\mathbf{v}_g$. For simplicity, we assume that $c_1$ is an integer in the following.

Determining the success probability

Next, we determine the success probability $p_{\mathrm{succ}} = \Pr[S \neq \emptyset]$, where $S$ denotes the following subset of the lattice $\Lambda$:

$$S = \Big\{\mathbf{w} \in \Lambda \;\Big|\; \mathbf{w} = (\mathbf{w}_l, \mathbf{w}_g)^t \text{ with } \mathbf{w}_l \in \{0, \pm 1\}^{2n-r},\ \mathbf{w}_g \in \{0, \pm 1\}^r,\ \text{exactly } 2c_i \text{ entries of } \mathbf{w}_g \text{ equal to } i \text{ for all } i \in \{-1, 1\},\ \mathrm{NP}_{\mathbf{B}'}(\mathbf{C}\mathbf{w}_g) = \mathbf{w}_l\Big\},$$

where $\mathbf{B}'$ is as defined in Heuristic 1. We assume that $S$ is a subset of the set consisting of all the rotations of $\mathbf{v}$ that can be utilized by the attack and their additive inverses. In particular, we assume that $S$ has at most $2(n-t)$ elements. Note that if some vector $\mathbf{w}$ is contained in $S$, then we also have $-\mathbf{w} \in S$. Assuming independence, the probability $p_S$ that $\mathbf{v} \in S$ is approximately given by

$$p_S \approx \frac{\binom{r}{2\tilde{c}_0,\, 2c_{-1},\, 2c_1} \binom{n-r}{2t - 4c_1}\, 2^{\,2t - 4c_1}}{\binom{n}{2t}\, 2^{\,2t}}\; p_{\mathrm{NP}},$$

where $d_0 = n - 2t$ and $2\tilde{c}_0 = r - 4c_1$, and $p_{\mathrm{NP}}$ is defined and calculated as in Section 4.2. Assuming independence, each of the $n - t$ good rotations of $\mathbf{v}$ is contained in $S$ with probability $p_S$ as well. Therefore, the probability $p_{\mathrm{succ}}$ that at least one good rotation is contained in $S$ is approximately

$$p_{\mathrm{succ}} = \Pr[S \neq \emptyset] \approx 1 - (1 - p_S)^{n-t}.$$

Next, we estimate the size of the set $S$ in the case $S \neq \emptyset$, i.e., when Algorithm 1 is successful. In that case, at least one rotation is contained in $S$. Then its additive inverse is also contained in $S$, hence $|S| \geq 2$. We can estimate the size of $S$ in case of success to be

$$|S| \approx 2 + 2(n - t - 1)\,p_S,$$

where $p_S$ is defined as above.
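The following Python sketch (our own; the value of $t$ and the nearest-plane probability in the example are hypothetical) evaluates $p_S$, $p_{\mathrm{succ}}$ and the expected size of $S$ as described above.

```python
from math import comb, factorial, prod

def multinomial(n, parts):
    assert sum(parts) == n
    return factorial(n) // prod(factorial(p) for p in parts)

def ntru_prime_success(n, t, r, c1, p_np):
    """p_S, p_succ = 1 - (1 - p_S)^(n - t) and the expected |S| for Streamlined
    NTRU Prime, following Section 5.2 (with c_{-1} = c_1)."""
    c0_twice = r - 4 * c1
    p_s = (multinomial(r, [c0_twice, 2 * c1, 2 * c1])
           * comb(n - r, 2 * t - 4 * c1) * 2 ** (2 * t - 4 * c1)
           / (comb(n, 2 * t) * 2 ** (2 * t))) * p_np
    p_succ = 1.0 - (1.0 - p_s) ** (n - t)
    expected_S = 2 + 2 * (n - t - 1) * p_s
    return p_s, p_succ, expected_S

# Hypothetical illustration: n = 607, t = 100 (made-up), r = 148,
# c_1 = round(r*t / (2n)), and an assumed nearest-plane probability 2^-40.
n, t, r = 607, 100, 148
c1 = round(r * t / (2 * n))
print(ntru_prime_success(n, t, r, c1, 2.0 ** -40))
```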

Optimizing the runtime

We applied our new techniques to estimate the minimal runtimes for several NTRU Prime parameter sets proposed in [8, Appendix D]. Besides the “case study parameter set”, for our analysis, we picked one parameter set that offers the lowest bit security and one that offers the highest according to the analysis of [8]. Our resulting security estimates are presented in Table 4. Our analysis shows that the authors of [8] underestimate the security of their scheme for all parameter sets we evaluated.

Table 4

Optimal attack parameters and security levels against the hybrid attack for NTRU Prime.

Parameter set                         | $n=607$, $q=18749$ | $n=739$, $q=9829$ | $n=929$, $q=12953$
Optimal $r$ cons/std                  | 148/162 | 235/257 | 328/353
Optimal $\delta_r$, cons              | 1.00466 | 1.00405 | 1.00346
Optimal $\delta_r$, std               | 1.00466 | 1.00407 | 1.00346
Corresp. $p_{\mathrm{succ}}$ cons/std | $2^{-63}$/$2^{-54}$ | $2^{-73}$/$2^{-60}$ | $2^{-80}$/$2^{-65}$
Security cons/std in bits             | 197/211 | 258/273 | 346/363
In [8]                                | 128 | 228 | 310

5.3 R-BinLWEEnc

In [9], Buchmann et al. presented R-BinLWEEnc, a lightweight public key encryption scheme based on binary Ring-LWE[8]. To determine the security of their scheme, the authors evaluate the hardness of binary LWE against the hybrid attack using the methodology of [10].

Constructing the lattice

Let $m, n, q \in \mathbb{N}$ with $m > n$ and let $(\mathbf{A}, \mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e} \bmod q)$ be a binary LWE instance with $\mathbf{A} \in \mathbb{Z}_q^{m \times n}$, $\mathbf{s} \in \mathbb{Z}_q^n$ and binary error $\mathbf{e} \in \{0,1\}^m$.[9] To obtain a more efficient attack, we first subtract the vector $(0.5, \ldots, 0.5, 0, \ldots, 0)$ with $m-r$ non-zero and $r$ zero entries from both sides of the equation $\mathbf{b} = \mathbf{A}\mathbf{s} + \mathbf{e} \bmod q$ to obtain a new LWE instance $(\mathbf{A}, \mathbf{b}' = \mathbf{A}\mathbf{s} + \mathbf{e}' \bmod q)$, where $\mathbf{e}' \in \{\pm 0.5\}^{m-r} \times \{0,1\}^r$. This way, the expected norm of the first $m-r$ entries is reduced, while the last $r$ entries, which are guessed during the attack, remain unchanged. In the following, we only consider this transformed LWE instance with smaller error. Obviously, the vector $(\mathbf{e}', 1)$ is contained in the $q$-ary lattice

$$\Lambda = \Lambda_q(\mathbf{A}') = \{\mathbf{v} \in \mathbb{Z}^{m+1} \mid \text{there exists } \mathbf{w} \in \mathbb{Z}^{n+1} \text{ such that } \mathbf{v} = \mathbf{A}'\mathbf{w} \bmod q\},$$

where

$$\mathbf{A}' = \begin{pmatrix} \mathbf{A} & \mathbf{b}' \\ \mathbf{0} & 1 \end{pmatrix} \in \mathbb{Z}_q^{(m+1) \times (n+1)}.$$

Note that constructing the lattice in this way, we only need the error vector $\mathbf{e}$ to be binary and not also the secret $\mathbf{s}$ as in [9, 10]. The dimension of the lattice $\Lambda$ is equal to $m+1$, and with high probability, its determinant is $q^{m-(n+1)}$, see, for example, [6]. However, as we know the last component of $(\mathbf{e}', 1)$, it does not need to be guessed, and we may hence ignore it for the hybrid attack and consider the lattice to be of dimension $m$.

Determining the attack parameters

Let $\mathbf{v} = \mathbf{e}' = (\mathbf{v}_l, \mathbf{v}_g)^t$ with $\mathbf{v}_l \in \{\pm\tfrac{1}{2}\}^{m-r}$ and $\mathbf{v}_g \in \{\pm\tfrac{1}{2}\}^r$. Then obviously, we have $\|\mathbf{v}\|_\infty \leq \tfrac{1}{2}$, so we set the infinity norm bounds $y = k = \tfrac{1}{2}$. Since $\mathbf{v}_l$ is a uniformly random vector in $\{\pm\tfrac{1}{2}\}^{m-r}$, the expected Euclidean norm of $\mathbf{v}_l$ is

$$\|\mathbf{v}_l\| \approx \sqrt{\frac{m-r}{4}}.$$

We set $2c_{-1/2} = 2c_{1/2} = \frac{r}{2}$ to be the expected number of $-\tfrac{1}{2}$ and $\tfrac{1}{2}$ entries of $\mathbf{v}_g$. In the following, we assume that $c_{-1/2} = c_{1/2}$ is an integer in order to avoid dealing with rounding operators.

Determining the success probability

We can approximate the success probability $p_{\mathrm{succ}}$ by $p_{\mathrm{succ}} \approx p_c \cdot p_{\mathrm{NP}}$, where $p_c$ is the probability that $\mathbf{v}_g$ has exactly $2c_{-1/2}$ entries equal to $-\tfrac{1}{2}$ and $2c_{1/2}$ entries equal to $\tfrac{1}{2}$, and $p_{\mathrm{NP}}$ is defined as in Section 4.2. Using the fact that $2c_{-1/2} + 2c_{1/2} = r$, we therefore obtain

$$p_{\mathrm{succ}} \approx p_c \cdot p_{\mathrm{NP}} = 2^{-r} \binom{r}{2c_{1/2}} p_{\mathrm{NP}}.$$

We assume that if the attack is successful, then $|S| = 2$, where $S$ is defined as in Heuristic 1, since $\mathbf{e}'$ and $-\mathbf{e}'$ are assumed to be the only vectors that can be found by the attack.

Optimizing the runtime

We reevaluated the security of the R-BinLWEEnc parameter sets proposed in [9]. Our security estimates, the optimal attack parameters r and δr, and the corresponding probability p are presented in Table 5. The original security estimates given in [9] are within the security range we determined.

Table 5

Optimal attack parameters and security levels against the hybrid attack for R-BinLWEEnc.

Parameter set                         | Set-I | Set-II | Set-III
Optimal $r$ cons/std                  | 104/116 | 88/96 | 276/268
Optimal $\delta_r$, cons              | 1.00691 | 1.00731 | 1.00478
Optimal $\delta_r$, std               | 1.00698 | 1.00741 | 1.00487
Corresp. $p_{\mathrm{succ}}$ cons/std | $2^{-33}$/$2^{-27}$ | $2^{-31}$/$2^{-28}$ | $2^{-38}$/$2^{-43}$
Security cons/std in bits             | 88/99 | 79/90 | 187/197
In [9]                                | 94 | 84 | 190

5.4 BLISS

The signature scheme BLISS, introduced in [14], is one of the most important lattice-based signature schemes. In the original paper, the authors considered the hybrid attack on their signature scheme for their security estimates; however, their analysis is rather vague.

Constructing the lattice

In the BLISS signature scheme, the setup is the following: Let n be a power of two, d1,d2 such that d1+d2n holds, q a prime modulus with q1mod2n, and q=q[x]/(xn+1). The signing key is of the form (s1,s2)=(f,2g+1), where fq×,gq, each with d1 coefficients in {±1} and d2 coefficients in {±2}, and the remaining coefficients equal to 0. The public key is essentially a=s2s1q. We assume that a is invertible in q, which is the case with very high probability. Hence, we obtain the equation s1=s2a-1q, or equivalently f=2ga-1+a-1modq. In the following, we identify polynomials with their coefficient vectors.

In order to recover the signing key, it is sufficient to find the vector $\mathbf{v} = (\mathbf{f}, \mathbf{g})^t$. Similar to our previous analysis of NTRU in Section 5.1, we have that

$$\mathbf{v} + \begin{pmatrix} -\mathbf{a}^{-1} \\ \mathbf{0} \end{pmatrix} = \begin{pmatrix} 2\mathbf{g}\mathbf{a}^{-1} + q\mathbf{w} \\ \mathbf{g} \end{pmatrix} = \begin{pmatrix} q\mathbf{I}_n & 2\mathbf{A} \\ \mathbf{0} & \mathbf{I}_n \end{pmatrix} \begin{pmatrix} \mathbf{w} \\ \mathbf{g} \end{pmatrix}$$

for some $\mathbf{w} \in \mathbb{Z}^n$, where $\mathbf{A}$ is the rotation matrix of $\mathbf{a}^{-1}$. Hence, $\mathbf{v}$ can be recovered by solving the BDD on input $(\mathbf{a}^{-1}, \mathbf{0})^t$ in the q-ary lattice

$$\Lambda = \Lambda\!\left(\begin{pmatrix} q\mathbf{I}_n & 2\mathbf{A} \\ \mathbf{0} & \mathbf{I}_n \end{pmatrix}\right),$$

since $(\mathbf{a}^{-1}, \mathbf{0})^t - \mathbf{v} \in \Lambda$. The determinant of the lattice $\Lambda$ is $q^n$, and its dimension is equal to $2n$.
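To make the construction concrete, the following numpy sketch (our own illustration; rot and bliss_bdd_basis are hypothetical helper names, and a_inv stands for the coefficient vector of $\mathbf{a}^{-1}$) builds the basis whose columns generate the lattice $\Lambda$ above:

import numpy as np

def rot(a, q):
    # negacyclic rotation matrix in Z_q[x]/(x^n + 1): column i holds the
    # coefficient vector of a * x^i mod (x^n + 1, q)
    n = len(a)
    R = np.zeros((n, n), dtype=np.int64)
    col = np.array(a, dtype=np.int64)
    for i in range(n):
        R[:, i] = col % q
        col = np.roll(col, 1)
        col[0] = -col[0]              # reduce x^n to -1
    return R

def bliss_bdd_basis(a_inv, q):
    # block matrix (q*I_n  2*rot(a^{-1}); 0  I_n) from the construction above
    n = len(a_inv)
    A = rot(a_inv, q)
    top = np.hstack([q * np.eye(n, dtype=np.int64), 2 * A])
    bottom = np.hstack([np.zeros((n, n), dtype=np.int64), np.eye(n, dtype=np.int64)])
    return np.vstack([top, bottom])

B = bliss_bdd_basis([3, 5, 2, 7], q=17)   # toy coefficients, n = 4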

Determining the attack parameters

In the following, let $\mathbf{v} = (\mathbf{f}, \mathbf{g})^t = (\mathbf{v}_l, \mathbf{v}_g)^t$ with $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^{r}$. Since we are using the hybrid attack to solve a BDD problem, the rotations of $\mathbf{v}$ cannot be utilized in the attack (or at least it is not known how); see Section 3. We therefore assume that $\mathbf{v}$ is the only rotation useful in the attack, i.e., that the set of good rotations $S$ contains at most $\mathbf{v}$. The first step is to determine proper bounds $y$ on $\|\mathbf{v}_l\|_{\infty}$ and $k$ on $\|\mathbf{v}_g\|_{\infty}$ and to find suitable guessing parameters $c_i$. By construction, we obviously have $\|\mathbf{v}\|_{\infty} \le 2$, thus we can set the infinity norm bounds $y = k = 2$. The expected Euclidean norm of $\mathbf{v}_l$ is given by

$$\|\mathbf{v}_l\| \approx \sqrt{d_1 + 4d_2 + \frac{n-r}{n}(d_1 + 4d_2)}.$$

We set $2c_i$ equal to the expected number of $i$-entries in $\mathbf{v}_g$, i.e., $c_{-2} = c_2 = \frac{r}{n}\cdot\frac{1}{4}d_2$ and $c_{-1} = c_1 = \frac{r}{n}\cdot\frac{1}{4}d_1$. For simplicity, we assume that $c_1$ and $c_2$ are integers in the following.

Determining the success probability

Next, we determine the success probability $p_{\mathrm{succ}}$, which is the probability that $\mathrm{NP}_{\mathbf{B}}(\mathbf{C}\mathbf{v}_g) = \mathbf{v}_l$ and exactly $2c_i$ entries of $\mathbf{v}_g$ are equal to $i$ for $i \in \{\pm 1, \ldots, \pm k\}$. The probability $p_c$ that exactly $2c_i$ entries of the vector $\mathbf{v}_g$ are equal to $i$ for all $i \in \{\pm 1, \ldots, \pm k\}$ is given by

$$\frac{\binom{r}{2\tilde{c}_0,\, 2c_{-1},\, 2c_1,\, 2c_{-2},\, 2c_2}\binom{n-r}{d_0 - 2\tilde{c}_0,\, d_1 - 4c_1,\, d_2 - 4c_2}\, 2^{\,d_1 + d_2 - 4(c_1 + c_2)}}{\binom{n}{d_0,\, d_1,\, d_2}\, 2^{\,d_1 + d_2}},$$

where $d_0 = n - d_1 - d_2$ and $2\tilde{c}_0 = r - 2(c_{-1} + c_1 + c_{-2} + c_2)$. Assuming independence, the success probability is approximately given by

$$p_{\mathrm{succ}} \approx p_c \cdot p_{\mathrm{NP}},$$

where $p_{\mathrm{NP}}$ is defined as in Section 4.2. As explained earlier, we assume that $S \subseteq \{\mathbf{v}\}$; so if Algorithm 1 is successful, we have $|S| = 1$.
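As a worked example of the formula for $p_c$, the following Python sketch evaluates it with exact rational arithmetic. The inputs are toy values loosely modeled on a BLISS-I-like key ($n = 512$, $d_1 = 154$, $d_2 = 0$) together with an arbitrary admissible choice of $c_1$; they are not the optimized attack parameters of Table 6.

from math import comb
from fractions import Fraction

def multinomial(n, parts):
    # multinomial coefficient n! / (parts[0]! * parts[1]! * ...), sum(parts) == n
    assert sum(parts) == n
    out, rest = 1, n
    for p in parts:
        out *= comb(rest, p)
        rest -= p
    return out

def p_c_bliss(n, r, d1, d2, c1, c2):
    # probability that the r guessed coordinates contain exactly 2*c1 entries
    # equal to +1 and -1 each, and 2*c2 entries equal to +2 and -2 each
    d0 = n - d1 - d2
    t0 = r - 4 * (c1 + c2)                     # this is 2*c~0
    num = (multinomial(r, [t0, 2 * c1, 2 * c1, 2 * c2, 2 * c2])
           * multinomial(n - r, [d0 - t0, d1 - 4 * c1, d2 - 4 * c2])
           * 2**(d1 + d2 - 4 * (c1 + c2)))
    den = multinomial(n, [d0, d1, d2]) * 2**(d1 + d2)
    return Fraction(num, den)

print(float(p_c_bliss(n=512, r=152, d1=154, d2=0, c1=11, c2=0)))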

Optimizing the runtime

We performed the optimization process for the BLISS parameter sets proposed in [14]. The results are presented in Table 6. Besides the security levels against the hybrid attack, we provide the optimal attack parameters r and δr leading to a minimal runtime of the attack, as well as the probability p. Our results show that the security estimates for the BLISS-I, BLISS-II, and BLISS-III parameter sets given in [14] are within the range of security we determined, whereas the BLISS-IV parameter set is less secure than originally claimed. In addition, the authors of [14] claim that there are at least 17 bits of security margins built into their security estimates, which is incorrect for all parameter sets according to our analysis.

Table 6

Optimal attack parameters and security levels against the hybrid attack for BLISS.

Parameter set | BLISS-I | BLISS-II | BLISS-III | BLISS-IV
Optimal r_cons/r_std | 152/152 | 152/152 | 109/144 | 99/137
Optimal δ_r,cons | 1.00588 | 1.00588 | 1.00532 | 1.00518
Optimal δ_r,std | 1.00600 | 1.00600 | 1.00541 | 1.00524
Corresp. p_succ cons/std | 2^-35/2^-38 | 2^-35/2^-38 | 2^-58/2^-40 | 2^-67/2^-44
Security cons/std in bits | 124/139 | 124/139 | 152/170 | 160/182
In [14] | 128 | 128 | 160 | 192
r used in [14] | 194 | 194 | 183 | 201

5.5 GLP

The GLP signature scheme was introduced in [16]. In the original work, the authors did not consider the hybrid attack when deriving their security estimates. Later, in [14], the hybrid attack was also applied to the GLP-I parameter set. The GLP-II parameter set has not been analyzed regarding the hybrid attack so far.

Constructing the lattice

For the GLP signature scheme, the setup is the following: Let $n$ be a power of two, $q$ a prime modulus with $q \equiv 1 \bmod 2n$, and $\mathcal{R}_q = \mathbb{Z}_q[x]/(x^n + 1)$. The signing key is of the form $(s_1, s_2)$, where $s_1$ and $s_2$ are sampled uniformly at random among all polynomials of $\mathcal{R}_q$ with coefficients in $\{-1, 0, 1\}$. The corresponding public key is then of the form $(a, b = a s_1 + s_2) \in \mathcal{R}_q^2$, where $a$ is drawn uniformly at random in $\mathcal{R}_q$. Hence, we know that $0 = -b + a s_1 + s_2$. Identifying polynomials with their coefficient vectors, we therefore have that

$$\mathbf{v} := \begin{pmatrix} -1 \\ \mathbf{s}_1 \\ \mathbf{s}_2 \end{pmatrix} \in \Lambda := \Lambda_q(\mathbf{A}) = \{\mathbf{w} \in \mathbb{Z}^{2n+1} \mid \mathbf{A}\mathbf{w} \equiv \mathbf{0} \bmod q\} \subseteq \mathbb{Z}^{2n+1},$$

where $\mathbf{A} = (\mathbf{b} \mid \mathrm{rot}(\mathbf{a}) \mid \mathbf{I}_n)$, and $\mathrm{rot}(\mathbf{a})$ is the rotation matrix of $\mathbf{a}$. Because of how the lattice is constructed, we do not assume that rotations of $\mathbf{v}$ can be utilized by the attack.[10] Therefore, with very high probability, $\mathbf{v}$ and $-\mathbf{v}$ are the only non-zero trinary vectors contained in $\Lambda$, which we assume in the following. Since $q$ is prime and $\mathbf{A}$ has full rank, we have that $\det \Lambda = q^n$; see, for example, [7]. In Appendix A.1, we show how to construct a basis of the form

$$\mathbf{B} = \begin{pmatrix} q\mathbf{I}_n & * \\ \mathbf{0} & \mathbf{I}_{n+1} \end{pmatrix} \in \mathbb{Z}^{(2n+1) \times (2n+1)}$$

for the q-ary lattice Λ.
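Before turning to the attack parameters, the membership claim $\mathbf{A}\mathbf{v} \equiv \mathbf{0} \bmod q$ can be sanity-checked with a few lines of Python (our own sketch; the sizes $n$ and $q$ are toy values, not the GLP parameters, and negacyclic_rot is a hypothetical helper):

import numpy as np

def negacyclic_rot(a, q):
    # columns are the coefficient vectors of a * x^i mod (x^n + 1, q)
    n = len(a)
    R = np.zeros((n, n), dtype=np.int64)
    col = np.array(a, dtype=np.int64)
    for i in range(n):
        R[:, i] = col % q
        col = np.roll(col, 1)
        col[0] = -col[0]
    return R

rng = np.random.default_rng(0)
n, q = 16, 257                            # toy sizes only
a = rng.integers(0, q, n)
s1 = rng.integers(-1, 2, n)               # coefficients in {-1, 0, 1}
s2 = rng.integers(-1, 2, n)
b = (negacyclic_rot(a, q) @ s1 + s2) % q

A = np.hstack([b.reshape(n, 1), negacyclic_rot(a, q), np.eye(n, dtype=np.int64)])
v = np.concatenate(([-1], s1, s2))        # the short vector (-1, s1, s2)
assert np.all((A @ v) % q == 0)           # v lies in the q-ary lattice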

Determining the attack parameters

Ignoring the first $-1$ coordinate, the short vector $\mathbf{v}$ is drawn uniformly from $\{-1, 0, 1\}^{2n+1}$. Let $\mathbf{v} = (\mathbf{v}_l, \mathbf{v}_g)^t$ with $\mathbf{v}_l \in \mathbb{Z}^{m-r}$ and $\mathbf{v}_g \in \mathbb{Z}^{r}$. Then obviously, $\|\mathbf{v}_l\|_{\infty} \le 1$ and $\|\mathbf{v}_g\|_{\infty} \le 1$ hold; so we can set the infinity norm bounds $y$ and $k$ equal to one. The expected Euclidean norm of $\mathbf{v}_l$ is approximately

$$\|\mathbf{v}_l\| \approx \sqrt{\frac{2(m-r)}{3}}.$$

We set $2c_{-1} = 2c_1 = \frac{r}{3}$ to be the expected number of ones and minus ones. For simplicity, we assume that $c_{-1} = c_1$ is an integer in the following.

Determining the success probability

The success probability $p_{\mathrm{succ}}$ of the attack is approximately $p_{\mathrm{succ}} \approx p_c \cdot p_{\mathrm{NP}}$, where $p_c$ is the probability that $\mathbf{v}_g$ has exactly $2c_{-1}$ entries equal to $-1$ and $2c_1$ entries equal to $1$, and $p_{\mathrm{NP}}$ is defined as in Section 4.2. Calculating $p_c$ yields

$$p_{\mathrm{succ}} \approx p_c \cdot p_{\mathrm{NP}} = 3^{-r}\binom{r}{\frac{r}{3}, \frac{r}{3}, \frac{r}{3}}\, p_{\mathrm{NP}}.$$

As previously mentioned, we assume that if the attack is successful, then |S|=2.
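The combinatorial factor can again be evaluated directly. The following short Python sketch (illustrative only) computes $p_c = 3^{-r}\binom{r}{r/3,\,r/3,\,r/3}$ for the guessing dimensions $r$ appearing in Table 7; as before, $p_{\mathrm{NP}}$ still has to be multiplied in.

from math import comb, log2

def p_c_glp(r):
    # probability that a uniform vector in {-1, 0, 1}^r contains exactly r/3
    # entries of each value (r is assumed to be divisible by 3)
    k = r // 3
    return comb(r, k) * comb(r - k, k) / 3**r

for r in (30, 54, 168, 192):              # values of r from Table 7
    print(r, f"2^{log2(p_c_glp(r)):.2f}")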

Optimizing the runtime

We performed the optimization for the GLP parameter sets proposed in [16]. The results, including the optimal attack parameters r and δr and the probability p, are shown in Table 7. The security level of the GLP-I parameter set claimed in [14] is within the range of security we determined. In [14], the authors did not analyze the hybrid attack for the GLP-II parameter set. Güneysu et al. [16] claimed a security level of at least 256 bits (not considering the hybrid attack) for the GLP-II parameter set, whereas we show that it offers at most 233 bits of security against the hybrid attack.

Table 7

Optimal attack parameters and security levels against the hybrid attack for GLP.

Parameter set | GLP-I | GLP-II
Optimal r_cons/r_std | 30/54 | 168/192
Optimal δ_r,cons | 1.00776 | 1.00450
Optimal δ_r,std | 1.00769 | 1.00451
Corresponding p_succ cons/std | 2^-41/2^-25 | 2^-61/2^-49
Security cons/std in bits | 71/88 | 212/233
In [14, 16] | 75 to 80 | 256
r used in [14] | 85 | —

6 Conclusion and future work

In this work, we described a general version of the hybrid attack and presented improved techniques for analyzing its runtime. We further reevaluated various cryptographic schemes with respect to their security against the hybrid attack. Our analysis shows that several security estimates from previous works were in fact unreliable. By updating these estimates, we contribute to the trustworthiness of security estimates for lattice-based cryptography.

For future work, we hope that more provable statements about the practicality of the hybrid attack can be derived. For instance, our results show that the hybrid attack is not the best known attack on all NTRU instances as previously thought. It would be interesting to prove that under certain conditions on the key structure, the hybrid attack is always outperformed by some other attack. Another possible line of future work is applying the hybrid attack to a broader range of cryptographic schemes than already done in this work. Furthermore, analyzing the memory requirements of the hybrid attack was out of the scope of this work. In future research, the memory requirements can be analyzed and reduced (as for example done in [32] for binary NTRU keys). If required, the memory consumption can then be taken into account when optimizing the attack parameters for the hybrid attack.


Communicated by Rainer Steinwandt


Award Identifier / Grant number: CRC 1119

Funding statement: This work has been co-funded by the DFG as part of project P1 within the CRC 1119 CROSSING.

A Appendix: On q-ary lattices

A.1 Constructing a basis of the required form

In the following lemma, we show that for q-ary lattices, where q is prime, there always exists a basis of the form required for the attack. The size of the identity in the bottom right corner of the basis depends on the determinant of the lattice. In the proof, we also show how to construct such a basis.

Lemma A.1.

Let $q$ be prime, $m \in \mathbb{N}$, and let $\Lambda \subseteq \mathbb{Z}^m$ be a q-ary lattice.

  (i) There exists some $k \in \mathbb{N}$ with $0 \le k \le m$ such that $\det(\Lambda) = q^k$.

  (ii) Let $\det(\Lambda) = q^k$. Then there is a matrix $\mathbf{A} \in \mathbb{Z}_q^{m \times (m-k)}$ of rank $m-k$ (over $\mathbb{Z}_q$) such that $\Lambda = \Lambda_q(\mathbf{A})$.

  (iii) Let $\det(\Lambda) = q^k$ and let $\mathbf{A} = \begin{pmatrix}\mathbf{A}_1\\ \mathbf{A}_2\end{pmatrix}$ with $\mathbf{A}_1 \in \mathbb{Z}_q^{k \times (m-k)}$, $\mathbf{A}_2 \in \mathbb{Z}_q^{(m-k) \times (m-k)}$ be a matrix of rank $m-k$ (over $\mathbb{Z}_q$) such that $\Lambda = \Lambda_q(\mathbf{A})$. If $\mathbf{A}_2$ is invertible over $\mathbb{Z}_q$, then the columns of the matrix

    $$\mathbf{B} = \begin{pmatrix} q\mathbf{I}_k & \mathbf{A}_1\mathbf{A}_2^{-1} \\ \mathbf{0} & \mathbf{I}_{m-k} \end{pmatrix} \in \mathbb{Z}^{m \times m}$$

    form a basis of the lattice Λ.

Proof.

(i) Obviously, $\det(\Lambda)$ divides $\det(q\mathbb{Z}^m) = q^m$, since $q\mathbb{Z}^m \subseteq \Lambda$, and therefore $\det(\Lambda)$ is some non-negative power of $q$, because $q$ is prime.

(ii) We have $(\mathbb{Z}^m : q\mathbb{Z}^m) = (\mathbb{Z}^m : \Lambda)(\Lambda : q\mathbb{Z}^m)$, and therefore

$$(\Lambda : q\mathbb{Z}^m) = \frac{(\mathbb{Z}^m : q\mathbb{Z}^m)}{(\mathbb{Z}^m : \Lambda)} = \frac{\det(q\mathbb{Z}^m)}{\det(\Lambda)} = q^{m-k}.$$

Let $\mathbf{A}' \in \mathbb{Z}_q^{m \times m}$ be some lattice basis of $\Lambda$. Since $\Lambda/q\mathbb{Z}^m$ is in one-to-one correspondence to the $\mathbb{Z}_q$-vector space spanned by $\mathbf{A}'$, this vector space has to be of dimension $m-k$, and therefore $\mathbf{A}'$ has rank $m-k$ over $\mathbb{Z}_q$. This implies that there is some matrix $\mathbf{A}$ consisting of $m-k$ columns of $\mathbf{A}'$ such that $\Lambda = \Lambda(q\mathbf{I}_m \mid \mathbf{A}) = \Lambda_q(\mathbf{A})$.

(iii) By assumption, $\mathbf{A}_2$ is invertible, and thus we have

$$\begin{aligned} \Lambda &= \{\mathbf{v} \in \mathbb{Z}^m \mid \text{there exists a } \mathbf{w} \in \mathbb{Z}^{m-k} \text{ such that } \mathbf{v} = \mathbf{A}\mathbf{w} \bmod q\}\\ &= \{\mathbf{v} \in \mathbb{Z}^m \mid \text{there exists a } \mathbf{w} \in \mathbb{Z}^{m-k} \text{ such that } \mathbf{v} = \begin{pmatrix}\mathbf{A}_1\\ \mathbf{A}_2\end{pmatrix}\mathbf{A}_2^{-1}\mathbf{w} \bmod q\}\\ &= \left\{\begin{pmatrix}\mathbf{A}_1\mathbf{A}_2^{-1}\\ \mathbf{I}_{m-k}\end{pmatrix}\mathbf{w} \;\middle|\; \mathbf{w} \in \mathbb{Z}^{m-k}\right\} + q\mathbb{Z}^m. \end{aligned}$$

Therefore, the columns of the matrix

$$\left(\, q\mathbf{I}_m \;\middle|\; \begin{matrix}\mathbf{A}_1\mathbf{A}_2^{-1}\\ \mathbf{I}_{m-k}\end{matrix} \,\right) \in \mathbb{Z}^{m \times (m + (m-k))}$$

form a generating set of the lattice Λ, which can be reduced to the basis 𝐁. ∎
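The constructive part of the proof translates directly into code. The following sketch (our own illustration using sympy for exact modular matrix arithmetic; qary_basis and the toy matrix are not from the paper) builds the basis of Lemma A.1 (iii) and checks that the columns of $\mathbf{A}$ lie in the lattice generated by the columns of $\mathbf{B}$:

from sympy import Matrix, eye, zeros

def qary_basis(A, q):
    # Lemma A.1 (iii): A is an m x (m-k) integer matrix whose bottom
    # (m-k) x (m-k) block A2 is invertible mod q; return the basis
    # B = (q*I_k  A1*A2^{-1} mod q; 0  I_{m-k}) of Lambda_q(A).
    m, cols = A.shape
    k = m - cols
    A1, A2 = A[:k, :], A[k:, :]
    top_right = (A1 * A2.inv_mod(q)).applyfunc(lambda x: x % q)
    top = Matrix.hstack(q * eye(k), top_right)
    bottom = Matrix.hstack(zeros(cols, k), eye(cols))
    return Matrix.vstack(top, bottom)

q = 11
A = Matrix([[3, 1], [7, 2], [4, 9], [2, 1], [1, 1]])   # toy example, m = 5, k = 3
B = qary_basis(A, q)
# every column of A is an integer combination of the columns of B
assert all(x.is_integer for x in B.inv() * A)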

A.2 Modifying the GSA for q-ary lattices

Typically, the Gram–Schmidt lengths obtained after performing a basis reduction with quality δ can be approximated by the geometric series assumption (GSA), see Section 2. However, for q-ary lattices of the above form, this assumption may be modified. This has already been considered and confirmed with experimental results in previous works, see for example [20, 17, 18, 30]. However, in this work, we derive simple formulas predicting the quality of the reduction, and therefore explain how to obtain these formulas in more detail. We begin by sketching the reason for modifying the GSA for q-ary lattices, given a lattice basis 𝐁 of the form

$$\mathbf{B} = \begin{pmatrix} q\mathbf{I}_a & * \\ \mathbf{0} & \mathbf{I}_b \end{pmatrix} \in \mathbb{Z}^{d \times d},$$

where d=a+b. If the basis reduction is not strong enough, i.e., the Hermite delta is too large, the GSA predicts that the first Gram–Schmidt vectors of the reduced basis have norm bigger than q. However, in practice, this will not happen, since in this case, the first vectors will simply not be reduced. This means that instead of reducing the whole basis 𝐁, one can just reduce the last vectors that will actually be reduced. Let k denote the (so far unknown) number of the last vectors that are actually reduced (i.e., their corresponding Gram–Schmidt vectors according to the GSA have norm smaller than q). We assume that the basis reduction is sufficiently weak such that k<d and sufficiently strong such that k>b. We write 𝐁 in the form

$$\mathbf{B} = \begin{pmatrix} q\mathbf{I}_{d-k} & \mathbf{D} \\ \mathbf{0} & \mathbf{B}_1 \end{pmatrix}$$

for some $\mathbf{B}_1 \in \mathbb{Z}^{k \times k}$ and $\mathbf{D} \in \mathbb{Z}^{(d-k) \times k}$. Now, instead of $\mathbf{B}$, we only reduce $\mathbf{B}_1$ to $\mathbf{B}_1' = \mathbf{B}_1\mathbf{U}$ for some unimodular $\mathbf{U} \in \mathbb{Z}^{k \times k}$. This yields a reduced basis

$$\mathbf{B}' = \begin{pmatrix} q\mathbf{I}_{d-k} & \mathbf{D}\mathbf{U} \\ \mathbf{0} & \mathbf{B}_1' \end{pmatrix}$$

of the lattice spanned by $\mathbf{B}$. The Gram–Schmidt basis of this new basis $\mathbf{B}'$ is given by

$$\overline{\mathbf{B}'} = \begin{pmatrix} q\mathbf{I}_{d-k} & \mathbf{0} \\ \mathbf{0} & \overline{\mathbf{B}_1'} \end{pmatrix}.$$

Therefore, the lengths of the Gram–Schmidt basis vectors of $\overline{\mathbf{B}'}$ are $q$ for the first $d-k$ vectors and then equal to the lengths of the Gram–Schmidt basis vectors of $\overline{\mathbf{B}_1'}$, which are smaller than $q$. In order to predict the lengths of $\overline{\mathbf{B}'}$, we can apply the GSA to the lengths of the Gram–Schmidt basis vectors of $\overline{\mathbf{B}_1'}$, since they are actually reduced. What remains is to determine $k$. Assume we apply a basis reduction on $\mathbf{B}_1$ that results in a reduced basis $\mathbf{B}_1'$ of Hermite delta $\delta$. By our construction, we can assume that the first Gram–Schmidt basis vector of $\overline{\mathbf{B}_1'}$ has norm roughly equal to $q$, so the GSA implies

$$\delta^{k}\,\det(\Lambda(\mathbf{B}_1))^{\frac{1}{k}} = q.$$

Using the fact that $\det(\Lambda(\mathbf{B}_1)) = q^{k-b}$ and $k < d$, we can solve for $k$ and obtain

$$k = \min\left(\sqrt{\frac{b}{\log_q(\delta)}},\, d\right). \tag{A.1}$$

Summarizing, we expect that after the basis reduction, our Gram–Schmidt basis $\overline{\mathbf{B}'}$ has lengths $R_1, \ldots, R_d$, where

$$R_i = \begin{cases} q & \text{if } i \le d-k, \\ \delta^{-2(i-(d-k)-1)+k}\, q^{\frac{k-b}{k}} & \text{else}, \end{cases}$$

and k is given as in equation (A.1).

Note that it might also happen that the last Gram–Schmidt lengths are predicted to be smaller than 1. In this case, these last vectors will also not be reduced in reality, since the basis matrix has the identity in the bottom right corner. Therefore, in this case, the GSA may be further modified. However, for realistic attack parameters, this phenomenon never occurred during our runtime optimizations, and therefore we do not include it in our formulas and leave it to the reader to do the easy calculations if needed.
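The prediction is easy to evaluate numerically. The following Python sketch (our own illustration with made-up values for a, b, q and delta; it does not apply the further correction for predicted lengths below 1 discussed above) computes $k$ from equation (A.1) and the resulting profile $R_1, \ldots, R_d$:

from math import log, sqrt

def predicted_gs_lengths(a, b, q, delta):
    # Gram-Schmidt length profile predicted by the modified GSA for a basis
    # of the form (q*I_a  *; 0  I_b) of dimension d = a + b
    d = a + b
    k = min(sqrt(b / (log(delta) / log(q))), d)    # k = sqrt(b / log_q(delta))
    R = []
    for i in range(1, d + 1):
        if i <= d - k:
            R.append(float(q))                     # not touched by the reduction
        else:
            R.append(delta ** (-2 * (i - (d - k) - 1) + k) * q ** ((k - b) / k))
    return R

profile = predicted_gs_lengths(a=300, b=100, q=521, delta=1.007)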

A.3 Experimental results for the probability p

In the following, we provide the results of some preliminary experiments supporting the validity of the formula for the collision-finding probability p provided in Assumption 4. To this end, for n=40, n=60 and n=80, we created q-ary lattices Λn that contain a short binary vector by embedding binary LWE instances into an SVP problem according to Section 5.3 (however, we did not shift the first components of the error vector by 0.5). For each n, we chose a random binary LWE instance with LWE parameters n,m=2n,q=521 and created a basis 𝐁𝐧 of the corresponding SVP lattice of the form

$$\mathbf{B_n} = \begin{pmatrix} \mathbf{B}_n & \mathbf{C}_n \\ \mathbf{0} & \mathbf{I}_r \end{pmatrix} \in \mathbb{Z}^{(m+1) \times (m+1)}$$

with $r = 28$. We then BKZ-reduced the upper-left part $\mathbf{B}_n$ of the basis with block size 20. Let $(\mathbf{v}_l^{(n)}, \mathbf{v}_g^{(n)}) \in \Lambda_n$ with $\mathbf{v}_g^{(n)} \in \{0,1\}^r$ be the short binary vector in $\Lambda_n$. For each $n$, we repeated the following experiment 10,000 times and recorded the number of success cases: Guess a vector $\mathbf{w} \in \{0,1\}^r$ with $r/4 = 7$ non-zero entries as in the hybrid attack and check whether the vector $\mathbf{C}_n\mathbf{w}$ is $\mathbf{v}_l^{(n)}$-admissible with respect to the basis $\mathbf{B}_n$ by checking if $\mathrm{NP}(\mathbf{C}_n\mathbf{w}) = \mathrm{NP}(\mathbf{C}_n\mathbf{w} - \mathbf{v}_l^{(n)}) + \mathbf{v}_l^{(n)}$. The experiments were performed using SageMath [31]. For comparison, we calculated the probability $p$ according to the formula provided in Assumption 4, using the actual Gram–Schmidt norms of the basis $\mathbf{B}_n$ and the norm of $\mathbf{v}_l^{(n)}$ to calculate the $r_i$. The results are presented in Table 8. They suggest that for the analyzed instances, Assumption 4 is a good approximation of the actual probability.

Table 8

Comparison of Assumption 4 and our experimental results for binary LWE instances with LWE parameters n,m=2n,q=521 for n=40, n=60, and n=80.

 | n = 40 | n = 60 | n = 80
p according to Assumption 4 | 2^-0.399 | 2^-1.709 | 2^-3.857
p according to experiments | 2^-0.400 | 2^-1.563 | 2^-4.088
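For readers who want to reproduce such experiments outside SageMath, the following numpy sketch (a simplified illustration; it performs no basis reduction, and B, C, w and v_l in the commented usage lines are placeholders) implements Babai's nearest plane algorithm and the admissibility check $\mathrm{NP}(\mathbf{C}\mathbf{w}) = \mathrm{NP}(\mathbf{C}\mathbf{w} - \mathbf{v}_l) + \mathbf{v}_l$:

import numpy as np

def gram_schmidt(B):
    # Gram-Schmidt orthogonalization of the columns of B (no normalization)
    B = B.astype(float)
    G = np.zeros_like(B)
    for i in range(B.shape[1]):
        v = B[:, i].copy()
        for j in range(i):
            v -= (B[:, i] @ G[:, j]) / (G[:, j] @ G[:, j]) * G[:, j]
        G[:, i] = v
    return G

def nearest_plane(B, t):
    # Babai's nearest plane: return the representative of t modulo the lattice
    # spanned by the columns of B lying in the fundamental domain of the
    # Gram-Schmidt vectors of B
    G = gram_schmidt(B)
    t = np.array(t, dtype=float)
    for i in reversed(range(B.shape[1])):
        c = round((t @ G[:, i]) / (G[:, i] @ G[:, i]))
        t = t - c * B[:, i].astype(float)
    return t

# Admissibility check from the experiment above (B, C, w, v_l are placeholders):
# admissible = np.allclose(nearest_plane(B, C @ w),
#                          nearest_plane(B, C @ w - v_l) + v_l)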

Acknowledgements

We thank Florian Göpfert and John Schanck for helpful discussions and comments.

References

[1] M. R. Albrecht, R. Fitzpatrick and F. Göpfert, On the efficacy of solving LWE by reduction to unique-SVP, Information Security and Cryptology—ICISC 2013, Lecture Notes in Comput. Sci. 8565, Springer, Cham (2014), 293–310, doi: 10.1007/978-3-319-12160-4_18.

[2] M. R. Albrecht, R. Player and S. Scott, On the concrete hardness of learning with errors, J. Math. Cryptol. 9 (2015), no. 3, 169–203, doi: 10.1515/jmc-2015-0016.

[3] E. Alkim, L. Ducas, T. Pöppelmann and P. Schwabe, Post-quantum key exchange - A new hope, Proceedings of the 25th USENIX Security Symposium, USENIX, Berkeley (2016), 327–343.

[4] Y. Aono, Y. Wang, T. Hayashi and T. Takagi, Improved progressive BKZ algorithms and their precise cost estimation by sharp simulator, Advances in Cryptology—EUROCRYPT 2016. Part I, Lecture Notes in Comput. Sci. 9665, Springer, Berlin (2016), 789–819, doi: 10.1007/978-3-662-49890-3_30.

[5] L. Babai, On Lovász' lattice reduction and the nearest lattice point problem, Annual Symposium on Theoretical Aspects of Computer Science—STACS 85, Lecture Notes in Comput. Sci. 182, Springer, Berlin (1985), 13–20, doi: 10.1007/BFb0023990.

[6] S. Bai and S. D. Galbraith, Lattice decoding attacks on binary LWE, Information Security and Privacy—ACISP 2014, Lecture Notes in Comput. Sci. 8544, Springer, Berlin (2014), 322–337, doi: 10.1007/978-3-319-08344-5_21.

[7] D. J. Bernstein, J. Buchmann and E. Dahmen, Post-Quantum Cryptography, Springer, Berlin, 2009, doi: 10.1007/978-3-540-88702-7.

[8] D. J. Bernstein, C. Chuengsatiansup, T. Lange and C. van Vredendaal, NTRU Prime: Reducing attack surface at low cost, Selected Areas in Cryptography—SAC 2017, Lecture Notes in Comput. Sci. 10719, Springer, Cham (2018), 235–260, doi: 10.1007/978-3-319-72565-9_12.

[9] J. Buchmann, F. Göpfert, T. Güneysu, T. Oder and T. Pöppelmann, High-performance and lightweight lattice-based public-key encryption, Proceedings of the 2nd ACM International Workshop on IoT Privacy, Trust, and Security, ACM, New York (2016), 2–9, doi: 10.1145/2899007.2899011.

[10] J. Buchmann, F. Göpfert, R. Player and T. Wunderer, On the hardness of LWE with binary error: Revisiting the hybrid lattice-reduction and meet-in-the-middle attack, Progress in Cryptology—AFRICACRYPT 2016, Lecture Notes in Comput. Sci. 9646, Springer, Cham (2016), 24–43, doi: 10.1007/978-3-319-31517-1_2.

[11] R. Canetti and J. A. Garay (eds.), Advances in Cryptology—CRYPTO 2013. Part I, Lecture Notes in Comput. Sci. 8042, Springer, Heidelberg, 2013, doi: 10.1007/978-3-642-40084-1.

[12] Y. Chen, Réduction de réseau et sécurité concrète du chiffrement complètement homomorphe, PhD thesis, Paris 7, 2013.

[13] Y. Chen and P. Q. Nguyen, BKZ 2.0: Better lattice security estimates, Advances in Cryptology—ASIACRYPT 2011, Lecture Notes in Comput. Sci. 7073, Springer, Heidelberg (2011), 1–20, doi: 10.1007/978-3-642-25385-0_1.

[14] L. Ducas, A. Durmus, T. Lepoint and V. Lyubashevsky, Lattice signatures and bimodal Gaussians, Advances in Cryptology—CRYPTO 2013. Part I, Lecture Notes in Comput. Sci. 8042, Springer, Heidelberg (2013), 40–56, doi: 10.1007/978-3-642-40041-4_3.

[15] M. Fischlin and J.-S. Coron (eds.), Advances in Cryptology—EUROCRYPT 2016. Part I, Lecture Notes in Comput. Sci. 9665, Springer, Berlin, 2016, doi: 10.1007/978-3-662-49890-3.

[16] T. Güneysu, V. Lyubashevsky and T. Pöppelmann, Practical lattice-based cryptography: A signature scheme for embedded systems, Cryptographic Hardware and Embedded Systems—CHES 2012, Lecture Notes in Comput. Sci. 7428, Springer, Berlin (2012), 530–547, doi: 10.1007/978-3-642-33027-8_31.

[17] P. S. Hirschhorn, J. Hoffstein, N. Howgrave-Graham and W. Whyte, Choosing NTRUEncrypt parameters in light of combined lattice reduction and MITM approaches, Applied Cryptography and Network Security—ACNS 2009, Lecture Notes in Comput. Sci. 5536, Springer, Berlin (2009), 437–455, doi: 10.1007/978-3-642-01957-9_27.

[18] J. Hoffstein, J. Pipher, J. M. Schanck, J. H. Silverman, W. Whyte and Z. Zhang, Choosing parameters for NTRUEncrypt, Topics in Cryptology—CT-RSA 2017, Lecture Notes in Comput. Sci. 10159, Springer, Cham (2017), 3–18, doi: 10.1007/978-3-319-52153-4_1.

[19] J. Hoffstein, J. Pipher and J. H. Silverman, NTRU: A ring-based public key cryptosystem, Algorithmic Number Theory (Portland 1998), Lecture Notes in Comput. Sci. 1423, Springer, Berlin (1998), 267–288, doi: 10.1007/BFb0054868.

[20] N. Howgrave-Graham, A hybrid lattice-reduction and meet-in-the-middle attack against NTRU, Advances in Cryptology—CRYPTO 2007, Lecture Notes in Comput. Sci. 4622, Springer, Berlin (2007), 150–169, doi: 10.1007/978-3-540-74143-5_9.

[21] N. Howgrave-Graham, A hybrid lattice-reduction and meet-in-the-middle attack against NTRU, Advances in Cryptology—CRYPTO 2007, Lecture Notes in Comput. Sci. 4622, Springer, Berlin (2007), 150–169, doi: 10.1007/978-3-540-74143-5_9.

[22] N. Howgrave-Graham, J. H. Silverman and W. Whyte, A meet-in-the-middle attack on an NTRU private key, https://www.securityinnovation.com/uploads/Crypto/NTRUTech004v2.pdf.

[23] A. K. Lenstra and E. R. Verheul, Selecting cryptographic key sizes, J. Cryptology 14 (2001), no. 4, 255–293, doi: 10.1007/s00145-001-0009-4.

[24] R. Lindner and C. Peikert, Better key sizes (and attacks) for LWE-based encryption, Topics in Cryptology—CT-RSA 2011, Lecture Notes in Comput. Sci. 6558, Springer, Heidelberg (2011), 319–339, doi: 10.1007/978-3-642-19074-2_21.

[25] V. Lyubashevsky, C. Peikert and O. Regev, On ideal lattices and learning with errors over rings, J. ACM 60 (2013), no. 6, Article ID 43, doi: 10.1007/978-3-642-13190-5_1.

[26] D. Micciancio and C. Peikert, Hardness of SIS and LWE with small parameters, Advances in Cryptology—CRYPTO 2013. Part I, Lecture Notes in Comput. Sci. 8042, Springer, Heidelberg (2013), 21–39, doi: 10.1007/978-3-642-40041-4_2.

[27] D. Micciancio and M. Walter, Practical, predictable lattice basis reduction, Advances in Cryptology—EUROCRYPT 2016. Part I, Lecture Notes in Comput. Sci. 9665, Springer, Berlin (2016), 820–849, doi: 10.1007/978-3-662-49890-3_31.

[28] F. W. J. Olver, NIST Handbook of Mathematical Functions, Cambridge University Press, Cambridge, 2010.

[29] O. Regev, On lattices, learning with errors, random linear codes, and cryptography, Proceedings of the 37th Annual ACM Symposium on Theory of Computing—STOC'05, ACM, New York (2005), 84–93, doi: 10.1145/1060590.1060603.

[30] J. Schanck, Practical lattice cryptosystems: NTRUEncrypt and NTRUMLS, Master's thesis, University of Waterloo, 2015.

[31] W. Stein, Sage Mathematics Software, Version 7.5.1, The Sage Development Team, 2017, http://www.sagemath.org.

[32] C. van Vredendaal, Reduced memory meet-in-the-middle attack against the NTRU private key, LMS J. Comput. Math. 19 (2016), Suppl. A, 43–57, doi: 10.1112/S1461157016000206.

Received: 2016-07-28
Revised: 2018-08-28
Accepted: 2018-09-03
Published Online: 2018-10-05
Published in Print: 2019-03-01

© 2019 Walter de Gruyter GmbH, Berlin/Boston

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
