Article Open Access

Leveled homomorphic encryption schemes for homomorphic encryption standard

  • Shuhong Gao and Kyle Yates
Published/Copyright: October 6, 2025

Abstract

Homomorphic encryption allows for computations on encrypted data without exposing the underlying plaintext, enabling secure and private data processing in various applications such as cloud computing and machine learning. This article presents a comprehensive mathematical foundation for three prominent homomorphic encryption schemes: Brakerski–Gentry–Vaikuntanathan (BGV), Brakerski–Fan–Vercauteren (BFV), and Cheon–Kim–Kim–Song (CKKS), all based on the ring learning with errors (RLWE) problem. We align our discussion with the functionalities proposed in the recent homomorphic encryption standard, providing detailed algorithms and correctness proofs for each scheme. In addition, we propose improvements to the current schemes focusing on noise management and optimization of public key encryption and leveled homomorphic computation. Our modifications ensure that the noise bound remains within a fixed function for all levels of computation, guaranteeing correct decryption and maintaining efficiency comparable to existing methods. The proposed enhancements reduce ciphertext expansion and storage requirements, making these schemes more practical for real-world applications.

MSC 2010: 94A60

1 Introduction

Homomorphic encryption describes encryption schemes that allow addition and multiplication operations to be performed on ciphertexts without needing or leaking any information about the secret key or user messages. Furthermore, the operations in the ciphertext space correspond to performing the same operations on the original messages, and they can be carried out by any third party with knowledge of only the public information. Homomorphic encryption has several modern applications, such as secure cloud computing and private machine learning. With Craig Gentry's 2009 construction based on ideal lattices [1], fully homomorphic encryption became viable. This construction closely relates to the commonly used learning with errors (LWE) problem, whose hardness was proved by Regev in 2005 [2].

Three of the most common modern homomorphic encryption schemes are based on a ring version of the LWE problem, known as the ring learning with errors (RLWE) problem [3]. These schemes are the Brakerski–Gentry–Vaikuntanathan (BGV) scheme [4,5], the Brakerski–Fan–Vercauteren (BFV) scheme [6], and the Cheon–Kim–Kim–Song (CKKS) scheme [7]. BGV and BFV allow for homomorphic computation for exact arithmetic, while CKKS provides homomorphic computation for numerical computation with certain accuracy. With recent efforts to standardize homomorphic encryption schemes and security [8,9], it is desirable to have concrete and mathematically solid discussions of encryption schemes and homomorphic computing protocols that match the functionalities proposed in the standard, including parameter specifications for efficiency and security.

We should mention that Chillotti et al. [10,11] present a fully homomorphic encryption scheme that can perform one bit operation in less than 0.1 s, while Gao [12] and Case et al. [13] present a fully homomorphic encryption scheme with similar running time but a much smaller ciphertext expansion (< 20). However, each operation in these schemes is prohibitively expensive at the moment. Leveled schemes have a much larger ciphertext expansion, but each operation is much cheaper. A leveled scheme allows for some predetermined number of operations [5,14,15], which is the style we opt for in this article. Several works study noise bounds for homomorphic encryption [14,16–20] in both the canonical embedding and infinity norms. These analyses include both theory and implementations. Speedups can be implemented via the residue number system (RNS) [19,21–23], which uses the Chinese remainder theorem to break down computations into smaller rings. The schemes we present in this article can all be implemented using RNS representation.

Our contributions. This article has two main goals. The first goal is to present detailed algorithms for the functionalities proposed in the homomorphic encryption standard [8,9] for each of the BFV, BGV, and CKKS schemes, and to present detailed correctness proofs for all the functionalities. This lays a rigorous mathematical foundation for homomorphic encryption schemes. The second goal is to improve the current schemes for BFV, BGV, and CKKS. We present modified schemes for each of the three schemes, especially in public key encryption and leveled homomorphic computing, and focus on noise control and worst-case noise bounds, thereby reducing ciphertext expansion and storage expenses. In particular, under the modified schemes, the noise bound for ciphertexts from public key encryption and from homomorphic computing at each level is always bounded by a fixed function $\rho$, which depends on the underlying ring. The worst-case bound guarantees that ciphertexts can always be decrypted correctly, with no probability of decryption error, which is preferred for many applications. Furthermore, parameter sizes resulting from our worst-case bounds are comparable to parameter sizes from average-case bounds in the literature, thus not degrading the efficiency of the schemes.

Organization of this article. In Section 2, we describe notations and necessary background. We then introduce the LWE and RLWE problems. We present and prove two variations of modulus reduction, which are later applied to RLWE-based encryption schemes. In Section 3, we outline three RLWE-based homomorphic encryption schemes: BFV, BGV, and CKKS. For these three schemes, we provide modified encryption to better control noise and conduct a thorough worst-case theoretical noise analysis. In Section 4, we discuss leveled schemes and present techniques for choosing parameters to guarantee homomorphic operations. We also outline operations in RNS there. In Section 5, we give a brief discussion of attack techniques for LWE problems. In Section 6, we provide concluding remarks and further potential research topics. Appendix A contains the proofs of the lemmas on correctness of the functionalities for all the algorithms.

2 Notations and preliminaries

2.1 Notations

For a positive integer $q$, define $\mathbb{Z}_q := \mathbb{Z} \cap (-q/2, q/2]$ to be the ring of centered representatives modulo $q$. For an element $v \in \mathbb{Z}$, we denote by $[v]_q$ the modular reduction of $v$ into the interval $\mathbb{Z}_q$, so that $q$ divides $v - [v]_q$. When $v$ is a vector or a polynomial, $[v]_q$ means reducing each entry or coefficient of $v$ modulo $q$, respectively. Denote by $R_n$ the ring

$R_n = \mathbb{Z}[x]/(\phi(x)),$

where $\phi(x)$ is a polynomial of degree $n$ and $(\phi(x))$ is the ideal generated by $\phi(x)$. Often, we will choose $\phi(x)$ to be a power-of-two cyclotomic polynomial, that is, a polynomial of the form $\phi(x) = x^n + 1$, where $n$ is a power of two. For an integer $q$, we define $R_{n,q}$ as follows:

$R_{n,q} = \mathbb{Z}_q[x]/(\phi(x)) \cong \mathbb{Z}[x]/(\phi(x), q),$

where $(\phi(x), q)$ is the ideal generated by $\phi(x)$ and $q$. When $v$ is a polynomial, $[v]_{\phi(x)}$ denotes modular reduction of the polynomial into $R_n$. Similarly, when $v$ is a polynomial with integer coefficients, $[v]_{\phi(x),q}$ denotes modular reduction of the polynomial into $R_{n,q}$, where all the coefficients are in $(-q/2, q/2]$.

For a vector or polynomial $v$, the infinity norm of $v$, denoted $\|v\|_\infty$, is the maximum entry or coefficient of $v$ in absolute value. Equivalently, if $v = (a_0, \ldots, a_{n-1})$ or $v = \sum_{i=0}^{n-1} a_i x^i$, then

$\|v\|_\infty = \max\{|a_i| : i = 0, \ldots, n-1\}.$

$\|\cdot\|_2$ denotes the standard 2-norm. The symbols $\lfloor \cdot \rfloor$ and $\lceil \cdot \rceil$ will denote floor and ceiling, respectively, whereas $\lfloor \cdot \rceil$ will denote rounding to the nearest integer, rounding down in the case of a tie. When applying $\lfloor \cdot \rfloor$, $\lceil \cdot \rceil$, or $\lfloor \cdot \rceil$ to a polynomial or vector, we mean the rounding of each entry or coefficient. Define the expansion factor $\delta_R$ of $R_n$ as follows:

$\delta_R = \max\left\{ \frac{\|u v\|_\infty}{\|u\|_\infty \|v\|_\infty} : u, v \in R_n \setminus \{0\} \right\},$

where $uv$ must be reduced modulo $\phi(x)$ before applying the norm. When $\phi(x) = x^n + 1$, where $n$ is a power of two, it is well known that $\delta_R = n$.
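To make these conventions concrete, the following short Python sketch (our own illustration; the names `cmod`, `inf_norm`, and `negacyclic_mul` are not from the article) implements centered reduction $[v]_q$, the infinity norm, and multiplication in $R_n$ for $\phi(x) = x^n + 1$, and checks the expansion-factor bound $\|uv\|_\infty \leq n\,\|u\|_\infty \|v\|_\infty$ on an example where it is attained:

```python
def cmod(v, q):
    # centered representative of v modulo q, lying in (-q/2, q/2]
    r = v % q
    return r - q if r > q // 2 else r

def inf_norm(v):
    # infinity norm: largest coefficient in absolute value
    return max(abs(c) for c in v)

def negacyclic_mul(u, v, n):
    # multiplication in Z[x]/(x^n + 1): x^n wraps around as -1
    res = [0] * n
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            if i + j < n:
                res[i + j] += ui * vj
            else:
                res[i + j - n] -= ui * vj
    return res

assert cmod(13, 5) == -2          # 13 = 3*5 - 2

n = 4
u = [1, 1, 1, 1]                  # u(x) = 1 + x + x^2 + x^3
w = negacyclic_mul(u, u, n)       # u(x)^2 mod x^4 + 1
assert inf_norm(w) <= n * inf_norm(u) * inf_norm(u)   # delta_R = n
```

Here the bound is tight: $u^2 = -2 + 2x^2 + 4x^3$ modulo $x^4 + 1$, so $\|u^2\|_\infty = 4 = n$.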

2.2 Noise distributions and learning with errors problems

For a set $S$, we denote by $\chi(S)$ an arbitrary probability distribution $\chi$ on $S$. We denote by $U$ the uniform distribution. Let $\rho > 0$ be any integer. We denote by $\chi_\rho$ any probability distribution on $R_n$ where each coefficient is random in $[-\rho, \rho]$ and independent. We call $\chi_\rho$ an error distribution or noise distribution. We allow for flexibility in the exact choice of $\chi_\rho$, but most commonly, $\chi_\rho$ is chosen as uniform random on $[-\rho, \rho]$, or a truncated discrete Gaussian, to maintain security [24,25]. Over $\mathbb{Z}$, the discrete Gaussian distribution $D_{\mathbb{Z},\alpha q}$ assigns a probability proportional to $\exp(-\pi x^2/(\alpha q)^2)$ to each $x \in \mathbb{Z}$, with standard deviation $\sigma = \alpha q/\sqrt{2\pi}$ [2,24]. For the cyclotomic polynomial $\phi(x) = x^n + 1$, an $n$-dimensional extension of $D_{\mathbb{Z},\alpha q}$ to $R_n$ can be constructed by sampling each coefficient from $D_{\mathbb{Z},\alpha q}$. More details on the impact of the error distribution on security can be found in Section 5.
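As an illustration of the two common choices for $\chi_\rho$, here is a small Python sketch (our own; the function names and the rejection-sampling approach are illustrative, not from the article) that samples a polynomial with coefficients uniform on $[-\rho, \rho]$, and one from a discrete Gaussian truncated at $\rho$, using the relation $\alpha q = \sigma\sqrt{2\pi}$ from above:

```python
import math
import random

def sample_uniform(n, rho, rng):
    # chi_rho as bounded uniform: each coefficient uniform on [-rho, rho]
    return [rng.randint(-rho, rho) for _ in range(n)]

def sample_truncated_gaussian(n, sigma, rho, rng):
    # discrete Gaussian with Pr[x] proportional to exp(-pi x^2 / (alpha q)^2),
    # where alpha*q = sigma*sqrt(2*pi), truncated to [-rho, rho] by rejection
    alpha_q = sigma * math.sqrt(2 * math.pi)
    def one():
        while True:
            x = rng.randint(-rho, rho)      # uniform proposal on the support
            if rng.random() < math.exp(-math.pi * x * x / alpha_q ** 2):
                return x
    return [one() for _ in range(n)]

rng = random.Random(1)
n, rho = 8, 8
e1 = sample_uniform(n, rho, rng)
e2 = sample_truncated_gaussian(n, sigma=3.2, rho=rho, rng=rng)
assert all(abs(c) <= rho for c in e1 + e2)
```

Both samplers produce coefficients bounded by $\rho$ in absolute value, which is the only property the worst-case noise analysis in this article relies on.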

LWE problems. For any secret $s \in \mathbb{Z}_q^n$, we sample $e \leftarrow \chi(\mathbb{Z})$ from some desired distribution $\chi$ such that $|e| \leq \rho$, where $\rho$ is a desired parameter, we sample a uniform random $a \leftarrow U(\mathbb{Z}_q^n)$, and we calculate $b$ via $b \leftarrow [\langle a, s \rangle + e]_q$. The ordered pair $(a, b) \in \mathbb{Z}_q^n \times \mathbb{Z}_q$ is called an LWE sample. The search-LWE problem is to find $s$ given many LWE samples. The decision-LWE problem is, given many samples that are either LWE samples or sampled uniformly at random from $\mathbb{Z}_q^n \times \mathbb{Z}_q$, to decide which distribution the samples are drawn from [2,24].

When drawing elements from distributions on $R_n$ and $R_{n,q}$, we can similarly define the RLWE problems. For a secret $s \in R_{n,q}$, sample a polynomial $e \leftarrow \chi_\rho$, sample $a \leftarrow U(R_{n,q})$, and compute $b \in R_{n,q}$ via $b \leftarrow [a s + e]_{\phi(x),q}$. The ordered pair $(a, b) \in R_{n,q}^2$ is called an RLWE sample. The RLWE problems can be defined in a way analogous to the LWE problems. Throughout this article, when given an RLWE sample or similarly structured ordered pair, we will commonly refer to the polynomial $e$ as the noise term and $\|e\|_\infty$ as the noise size.

Most leveled homomorphic encryption schemes use RLWE as opposed to LWE. The ciphertexts of the homomorphic encryption schemes discussed will all essentially take the form of a modified RLWE sample. Regev originally showed the hardness of the LWE problem [2], which serves as a foundation for the security of homomorphic encryption schemes. We discuss more specifics on security in Section 5.

We remark that, in our schemes, we use noise size $\rho = n$, and the noise distribution can be uniform on integers bounded by $\rho$ or a discrete Gaussian distribution truncated at $\rho$. When using a bounded uniform distribution, our choice of noise has standard deviation $\sigma$ of about $n/\sqrt{3}$. In comparison, most implementations in practice (including the homomorphic encryption standard) use a discrete Gaussian distribution with $\sigma \approx 3.2$ [8,9,19,26]. Our larger noise bound increases the security, which will be discussed later.

2.3 Modulus reductions

Let $Q$ and $q$ be any positive integers. Given an RLWE sample $(a_0, b_0) \in R_{n,Q}^2$, where $b_0 \equiv a_0 s + e_0 \bmod (\phi(x), Q)$, we can compute a new RLWE sample $(a_0', b_0') \in R_{n,q}^2$ satisfying $b_0' \equiv a_0' s + e_0' \bmod (\phi(x), q)$ for some new noise term $e_0'$. Although this is a new RLWE sample with a new integer modulus $q$, the key observation is that the polynomial $s$ remains the same. Furthermore, if given an initial bound on $e_0$, we can guarantee a bound on $e_0'$ dependent on $Q$ and $q$. Algorithm 1 gives the procedure for modulus reduction, while Lemma 2.1 shows correctness and the resulting bound on $e_0'$. Although $Q$ and $q$ are any positive integers, we typically choose $Q > q$ and refer to this procedure as a modulus reduction rather than a modulus switch as many other papers do. Note here that we use a subscript of 0 in our RLWE sample, as it will provide more consistency with our later applications of this algorithm to ciphertexts. For this reason, we also label $(a_0, b_0)$ as $\mathrm{ct}_0$ in Algorithm 1.

Algorithm 1. BFV modulus reduction

BFV.Modreduce$(\mathrm{ct}_0, Q, q)$
Input: $Q \in \mathbb{N}$ an integer,
$q \in \mathbb{N}$ an integer,
$\mathrm{ct}_0 = (a_0, b_0) \in R_{n,Q}^2$.
Output: $\mathrm{ct}_0' = (a_0', b_0') \in R_{n,q}^2$.
Step 1. Compute $a_0' \leftarrow \left\lfloor \frac{q a_0}{Q} \right\rceil$ and $b_0' \leftarrow \left\lfloor \frac{q b_0}{Q} \right\rceil$.
Step 2. Return $\mathrm{ct}_0' = (a_0', b_0') \in R_{n,q}^2$.

Lemma 2.1

Suppose the input $\mathrm{ct}_0$ of Algorithm 1 is an RLWE sample such that $\|e_0\|_\infty \leq E$. Let $\mathrm{ct}_0'$ be the output of Algorithm 1. Then,

$b_0' \equiv a_0' s + e_0' \bmod (\phi(x), q)$

and $\|e_0'\|_\infty \leq \frac{q}{Q} E + \frac{\delta_R \|s\|_\infty + 1}{2}$. Furthermore, if $\frac{Q}{q} > \frac{2E}{\delta_R \|s\|_\infty - 1}$, then $\|e_0'\|_\infty < \delta_R \|s\|_\infty$.
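The following Python sketch (our own toy instantiation, with small illustrative parameters and no security) applies Algorithm 1 to a single RLWE sample and checks the noise bound of Lemma 2.1:

```python
import random

def cmod(v, q):
    # centered representative of v modulo q, in (-q/2, q/2]
    r = v % q
    return r - q if r > q // 2 else r

def nmul(u, v, n):
    # multiplication in Z[x]/(x^n + 1)
    res = [0] * n
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            if i + j < n:
                res[i + j] += ui * vj
            else:
                res[i + j - n] -= ui * vj
    return res

def modreduce(ct, Q, q):
    # Algorithm 1: scale every coefficient by q/Q and round to nearest integer
    rnd = lambda c: cmod((2 * q * c + Q) // (2 * Q), q)
    return [rnd(c) for c in ct[0]], [rnd(c) for c in ct[1]]

n, Q, q = 4, 1 << 20, 1 << 10
s = [1, 0, -1, 1]                                  # ||s|| = 1
e = [3, -2, 1, 0]                                  # noise, E = 3
rng = random.Random(7)
a = [rng.randint(-Q // 2 + 1, Q // 2) for _ in range(n)]
b = [cmod(x + ei, Q) for x, ei in zip(nmul(a, s, n), e)]      # b = a*s + e in R_{n,Q}

a2, b2 = modreduce((a, b), Q, q)
e_new = [cmod(x - y, q) for x, y in zip(b2, nmul(a2, s, n))]  # new noise e0'
E, dR = 3, n
assert max(map(abs, e_new)) <= q / Q * E + (dR * 1 + 1) / 2   # Lemma 2.1 bound
```

The secret $s$ is unchanged while the modulus drops from $2^{20}$ to $2^{10}$; the new noise is dominated by the rounding term $(\delta_R \|s\|_\infty + 1)/2$.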

What will be more useful than a modulus reduction for a generic RLWE sample will be a modulus reduction for a "modified" RLWE sample that takes the form of a standard BFV ciphertext, hence the algorithm name BFV.Modreduce. By a BFV ciphertext, we mean that our input $\mathrm{ct}_0 = (a_0, b_0) \in R_{n,Q}^2$ for Algorithm 1 satisfies

$b_0 + a_0 s \equiv D_Q m_0 + e_0 \bmod (\phi(x), Q)$

for some noise term $e_0 \in R_n$, where $m_0 \in R_{n,t}$ and $D_Q$ is a positive integer. This format is further clarified in Section 3.1. Classic BFV [6] uses $D_Q = \lfloor Q/t \rfloor$. In this article, we will always assume $t \mid (Q-1)$ for any given ciphertext modulus $Q$, meaning that $D_Q = (Q-1)/t$ when using the same floor definition as in classic BFV. The resulting output $\mathrm{ct}_0' = (a_0', b_0') \in R_{n,q}^2$ of Algorithm 1 satisfies $b_0' + a_0' s \equiv D_q m_0 + e_0' \bmod (\phi(x), q)$ for some $e_0'$, where $D_q = (q-1)/t$ when $t \mid (q-1)$. Algorithm 1 and the proof of Lemma 2.2 have a similar style to the proof of Lemma 2.3 in [12].

Lemma 2.2

Suppose the input of Algorithm 1 is a BFV ciphertext such that $\|e_0\|_\infty \leq E$. Let $\mathrm{ct}_0'$ be the output of Algorithm 1. If $t \mid (Q-1)$ and $t \mid (q-1)$, then

$b_0' + a_0' s \equiv D_q m_0 + e_0' \bmod (\phi(x), q)$

and $\|e_0'\|_\infty \leq \frac{q}{Q} E + 1 + \frac{\delta_R \|s\|_\infty}{2}$. Furthermore, if $\frac{Q}{q} > \frac{2E}{\delta_R \|s\|_\infty - 2}$, then $\|e_0'\|_\infty < \delta_R \|s\|_\infty$.

The final reformulation essentially states that if $Q/q$ meets a specific bound depending on $E$, modulus reduction always produces a new noise term $e_0'$ bounded by $\delta_R \|s\|_\infty$. A similar algorithm and lemma can also be constructed for a standard BGV ciphertext [4,5], which is an ordered pair $\mathrm{ct}_0 = (a_0, b_0) \in R_{n,Q}^2$ satisfying

$b_0 + a_0 s \equiv m_0 + t e_0 \bmod (\phi(x), Q)$

for some noise term $e_0$ and given message $m_0 \in R_{n,t}$, where $R_{n,t}$ is the message space for some integer $t > 1$. The procedure for computing the new ciphertext differs from the previous two modulus reduction algorithms. In particular, we use the procedure outlined in Lemma 4.3.1 of [27], which is given here as Algorithm 2.

Algorithm 2. BGV modulus reduction

BGV.Modreduce$(\mathrm{ct}_0, Q, q)$
Input: $Q \in \mathbb{N}$, an integer,
$q \in \mathbb{N}$, an integer,
$\mathrm{ct}_0 = (a_0, b_0) \in R_{n,Q}^2$, BGV ciphertext.
Output: $\mathrm{ct}_0' = (a_0', b_0') \in R_{n,q}^2$, BGV ciphertext.
Step 1. Compute
$\omega_a \leftarrow [-a_0 q t^{-1}]_Q$ and $\omega_b \leftarrow [-b_0 q t^{-1}]_Q$.
Step 2. Compute
$a_0' \leftarrow \left[ \frac{q a_0 + t \omega_a}{Q} \right]_q$ and $b_0' \leftarrow \left[ \frac{q b_0 + t \omega_b}{Q} \right]_q$.
Step 3. Return $\mathrm{ct}_0' = (a_0', b_0') \in R_{n,q}^2$.

Lemma 2.3

Suppose the input of Algorithm 2 is a BGV ciphertext such that $\|e_0\|_\infty \leq E$. Let $\mathrm{ct}_0'$ be the output of Algorithm 2. If $t \mid (Q-1)$ and $t \mid (q-1)$, then

$b_0' + a_0' s \equiv m_0 + t e_0' \bmod (\phi(x), q)$

and $\|e_0'\|_\infty \leq \frac{q}{Q} E + 1 + \frac{\delta_R \|s\|_\infty}{2}$. Furthermore, if $\frac{Q}{q} > \frac{2E}{\delta_R \|s\|_\infty - 2}$, then $\|e_0'\|_\infty < \delta_R \|s\|_\infty$.
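A toy Python instantiation of Algorithm 2 follows (our own sketch; in particular, we take $\omega_a \equiv -a_0 q t^{-1} \pmod{Q}$, which makes $q a_0 + t\omega_a$ exactly divisible by $Q$ while keeping the result congruent to $a_0$ modulo $t$, using $t \mid (Q-1)$ and $t \mid (q-1)$). It checks both that the message is preserved and that the new noise satisfies the bound of Lemma 2.3:

```python
import random

def cmod(v, q):
    r = v % q
    return r - q if r > q // 2 else r

def nmul(u, v, n):
    res = [0] * n
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            if i + j < n:
                res[i + j] += ui * vj
            else:
                res[i + j - n] -= ui * vj
    return res

def bgv_modreduce(ct, Q, q, t):
    # Algorithm 2 (sketch): omega makes q*c + t*omega divisible by Q, while
    # the result stays congruent to c modulo t
    tinv = pow(t, -1, Q)
    def red(poly):
        out = []
        for c in poly:
            w = cmod(-c * q * tinv, Q)
            num = q * c + t * w
            assert num % Q == 0                    # exact by choice of omega
            out.append(cmod(num // Q, q))
        return out
    return red(ct[0]), red(ct[1])

n, Q, q, t = 4, 1024, 64, 3                        # t | Q-1 and t | q-1
s = [1, 0, -1, 1]
m = [1, -1, 0, 1]                                  # message in R_{n,t}
e = [2, -3, 1, 0]                                  # noise, E = 3
rng = random.Random(3)
a = [rng.randint(-Q // 2 + 1, Q // 2) for _ in range(n)]
b = [cmod(mi + t * ei - x, Q)
     for mi, ei, x in zip(m, e, nmul(a, s, n))]    # b + a*s = m + t*e in R_{n,Q}

a2, b2 = bgv_modreduce((a, b), Q, q, t)
c = [cmod(x + y, q) for x, y in zip(b2, nmul(a2, s, n))]
m_rec = [cmod(ci, t) for ci in c]                  # message is preserved
assert m_rec == m
e_new = [(ci - mi) // t for ci, mi in zip(c, m)]
assert max(map(abs, e_new)) <= q / Q * 3 + 1 + n / 2   # Lemma 2.3 bound
```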

3 Homomorphic encryption schemes and noise bounds

Most homomorphic encryption schemes in the literature use a modified version of RLWE to hide messages. In this section, we will cover three main schemes: BFV [6], BGV [4,5], and CKKS [7]. For these three schemes, we present modified versions where the noise sizes are improved and always controlled by a fixed bound, namely $\rho = \delta_R \|s\|_\infty$.

Overview of specifications. Before outlining our specific algorithms, we first provide an overview of specifications for parameters and spaces. Although there are variations between the schemes, the parameter choices outlined below work for all of BFV, BGV, and CKKS (when applicable). These parameter conditions ensure proper functionality regarding homomorphic computation for each scheme. Further caution must be taken when choosing parameters in practice to ensure security, which is discussed in Section 5.

Specifications for homomorphic encryption schemes

Public parameters:
$q \in \mathbb{N}$;
$p_0 \in \mathbb{N}$ with $p_0 \geq 5\delta_R + 3$;
$p_1 \in \mathbb{N}$ with $p_1 \geq 6q$;
$t \in \mathbb{N}$ with $t \mid (q-1)$, $t \mid (p_0-1)$, and $t \mid (p_1-1)$.
Plaintext: $m \in R_{n,t}$ for BFV and BGV; $m \in \mathbb{C}^{n/2}$ for CKKS.
Secret key: $\mathrm{sk} \in R_{n,3}$.
Public key: $\mathrm{pk} \in R_{n,p_0 q}^2$.
Evaluation key: $\mathrm{ek} \in R_{n,p_1 q}^2$.
Ciphertext: $\mathrm{ct} \in R_{n,q}^2$.
Noise bound: $\rho = \delta_R \|s\|_\infty$.
Noise distribution: $\chi_\rho$, a probability distribution on $R_n$ with each coefficient random in $[-\rho, \rho]$.

We want to emphasize that each coefficient of the distribution $\chi_\rho$ can be uniform random on $[-\rho, \rho]$, or any sub-Gaussian truncated by the bound $\rho$, or any other distribution that is bounded by $\rho$. All the noise bounds in this article will be valid, since they depend only on the worst-case bound $\rho$. We will simply say "Sample $e \leftarrow \chi_\rho$" in all the algorithms for this generic distribution.

The bound $\rho$ appears prominently in our algorithms. It is not just a bound on the noise distribution, but also a worst-case bound both for fresh ciphertexts from public key encryption and for new ciphertexts after modulus reduction when going from one level to the next, as indicated by the lemmas in the previous section. In practical implementations, we often choose $n$ to be a power of 2 and $\phi(x) = x^n + 1$, hence $\delta_R = n$. When $\|s\|_\infty = 1$, we have $\rho = \delta_R \|s\|_\infty = n$.

Remark on message encoding and choice of $t$. In the BFV scheme, we choose to encode a message polynomial $m$ as $D_q m$, where $D_q = (q-1)/t$, which requires that $q-1$ be divisible by $t$. Kim et al. [16] propose to encode $m$ as $\lfloor q m / t \rceil$, which works for any $t$ and $q$, hence no restriction that $t \mid (q-1)$. An extra small amount of noise is introduced by their encryption style, but it has minimal impact and gives about the same bounds as our lemmas in the case we consider, where $t$ divides $q-1$. This alternate encryption style is slightly more expensive from a computation standpoint, as additional rounding operations must be performed as opposed to just integer multiplication. We refer the reader to the study by Kim et al. [16] for more details on plaintext modulus choice and SIMD.

3.1 Modified BFV scheme

BFV key generation. The key generation process we use is slightly different from the standard BFV scheme [6] in that the public key and evaluation key are generated in a larger modulus [6,14,16,19], which will be useful for reducing the noise size in ciphertexts. Algorithm 3 gives the key generation for the BFV keys needed, which are the secret key sk, the public key pk, and the evaluation key ek. Here, sk is kept secret, while pk and ek are published. The public key $\mathrm{pk} = (k_0, k_1) \in R_{n,p_0 q}^2$ satisfies

$k_1 + k_0 s \equiv -e \bmod (\phi(x), p_0 q)$

for noise term $e \leftarrow \chi_\rho$. The evaluation key $\mathrm{ek} = (k_0', k_1') \in R_{n,p_1 q}^2$ satisfies

$k_0' + k_1' s \equiv p_1 s^2 + e_1 \bmod (\phi(x), p_1 q)$

for noise term $e_1 \leftarrow \chi_\rho$. We remark that in Algorithm 3 we choose $s$ randomly rather than specifying the sampling distribution. This is intentional, as $s$ may need to satisfy certain properties in practice. For instance, $s$ is often chosen randomly with a predetermined Hamming weight in practice.

Algorithm 3. BFV key generation.

BFV.Keygen$(q, p_0, p_1)$
Input: $q \in \mathbb{N}$,
$p_0 \in \mathbb{N}$ with $p_0 \geq 5\delta_R + 3$,
$p_1 \in \mathbb{N}$ with $p_1 \geq 6q$.
Output: $\mathrm{sk} = s \in R_{n,3}$ secret key,
$\mathrm{pk} = (k_0, k_1) \in R_{n,p_0 q}^2$ public key,
$\mathrm{ek} = (k_0', k_1') \in R_{n,p_1 q}^2$ evaluation key.
Step 1. Choose randomly $s \in R_{n,3}$.
Step 2. Sample $k_0 \leftarrow U(R_{n,p_0 q})$ and $e \leftarrow \chi_\rho$.
Compute $k_1 \leftarrow [-(k_0 s + e)]_{\phi(x), p_0 q}$.
Step 3. Sample $k_1' \leftarrow U(R_{n,p_1 q})$ and $e_1 \leftarrow \chi_\rho$.
Compute $k_0' \leftarrow [-k_1' s + p_1 s^2 + e_1]_{\phi(x), p_1 q}$.
Step 4. Return $\mathrm{sk} = s$, $\mathrm{pk} = (k_0, k_1)$, and $\mathrm{ek} = (k_0', k_1')$.
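The following Python sketch (our own, with toy parameters far too small for security) runs Algorithm 3 and verifies the two key relations: the public-key residue $k_1 + k_0 s$ and the evaluation-key residue $k_0' + k_1' s - p_1 s^2$ are both bounded by $\rho$:

```python
import random

def cmod(v, q):
    r = v % q
    return r - q if r > q // 2 else r

def nmul(u, v, n):
    res = [0] * n
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            if i + j < n:
                res[i + j] += ui * vj
            else:
                res[i + j - n] -= ui * vj
    return res

def keygen(n, q, p0, p1, rng):
    # Algorithm 3 (sketch): keys live in the enlarged moduli p0*q and p1*q
    rho = n                                         # rho = delta_R*||s||, ||s|| <= 1
    s = [rng.choice([-1, 0, 1]) for _ in range(n)]
    k0 = [rng.randint(-p0 * q // 2 + 1, p0 * q // 2) for _ in range(n)]
    e = [rng.randint(-rho, rho) for _ in range(n)]
    k1 = [cmod(-(x + ei), p0 * q) for x, ei in zip(nmul(k0, s, n), e)]
    k1p = [rng.randint(-p1 * q // 2 + 1, p1 * q // 2) for _ in range(n)]
    e1 = [rng.randint(-rho, rho) for _ in range(n)]
    s2 = nmul(s, s, n)
    k0p = [cmod(-x + p1 * y + z, p1 * q)
           for x, y, z in zip(nmul(k1p, s, n), s2, e1)]
    return s, (k0, k1), (k0p, k1p)

n, q = 4, 64
p0, p1 = 5 * n + 3, 6 * q                           # p0 >= 5*delta_R + 3, p1 >= 6q
rng = random.Random(11)
s, (k0, k1), (k0p, k1p) = keygen(n, q, p0, p1, rng)

r = [cmod(x + y, p0 * q) for x, y in zip(k1, nmul(k0, s, n))]
assert max(map(abs, r)) <= n                        # k1 + k0*s = -e, rho-bounded
s2 = nmul(s, s, n)
r2 = [cmod(x + y - p1 * z, p1 * q)
      for x, y, z in zip(k0p, nmul(k1p, s, n), s2)]
assert max(map(abs, r2)) <= n                       # k0' + k1'*s = p1*s^2 + e1
```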

BFV encryption and decryption. We encrypt a message $m_0 \in R_{n,t}$ using a modified version of the standard public key procedure for BFV. Note that we choose the plaintext modulus $t$ so that $t$ divides $p_0 - 1$ and $q - 1$, and therefore divides $p_0 q - 1$. We immediately reduce the ciphertext modulus from $p_0 q$ to $q$ before adding the message bits, then return the ciphertext. The purpose of this is to ensure that the noise term of the returned ciphertext $\mathrm{ct}_0$ is within the constant bound $\rho$. Given our description of the secret key selection, it is obvious that $\|s\|_\infty = 1$. Most results we provide can be easily modified for the case of general $s$, however, allowing for some flexibility in key generation if desired. The exact choices of $p_0$ and $q$ will of course depend on several factors, such as the desired number of homomorphic computations. We will further discuss these choices in Section 4. Assuming these parameters, Algorithm 4 describes the encryption procedure for BFV. In many algorithms, we will refer to inputs as "BFV ciphertexts." By this, we mean some ordered pair $\mathrm{ct} = (a, b) \in R_{n,q}^2$ satisfying

$b + a s \equiv D_q m + e \bmod (\phi(x), q)$

for some message $m \in R_{n,t}$, some noise term $e \in R_n$, ciphertext modulus $q$, and constant $D_q = \lfloor q/t \rfloor = (q-1)/t$. This encryption style is the classic form of BFV encryption [6]. If $\|e\|_\infty \leq E$, we will say $\mathrm{ct} = (a, b)$ is a BFV ciphertext with noise bounded by $E$.

Algorithm 4. Modified BFV encryption

BFV.Encrypt$(m_0, D_q, \mathrm{pk})$
Input: $m_0 \in R_{n,t}$ message,
$D_q \in \mathbb{N}$ constant,
$\mathrm{pk} = (k_0, k_1) \in R_{n,p_0 q}^2$ public key.
Output: $\mathrm{ct}_0 = (a_0, b_0) \in R_{n,q}^2$ BFV ciphertext.
Step 1. Sample $u \leftarrow U(R_{n,3})$ and sample $e_1, e_2 \leftarrow \chi_\rho$.
Step 2. Compute $(a_0', b_0') \in R_{n,p_0 q}^2$, where
$a_0' \leftarrow [k_0 u + e_1]_{\phi(x), p_0 q}$,
$b_0' \leftarrow [k_1 u + e_2]_{\phi(x), p_0 q}$.
Step 3. Compute
$(a_0, b_0^*) \leftarrow \mathrm{BFV.Modreduce}((a_0', b_0'), p_0 q, q)$,
$b_0 \leftarrow [b_0^* + D_q m_0]_q$.
Step 4. Return $\mathrm{ct}_0 = (a_0, b_0) \in R_{n,q}^2$.

Lemma 3.1 provides correctness and the corresponding noise bound resulting from encryption. The bounds in Lemma 3.1 assume that $\|s\|_\infty = \|u\|_\infty = 1$. However, the bounds can be discussed in terms of more generic $u$ and $s$, in which case the bound condition on $p_0$ is $p_0 > \frac{2\delta_R^2(\|u\|_\infty + \|s\|_\infty) + 2\delta_R}{\delta_R \|s\|_\infty - 1}$. In this case, the resulting noise term from encryption still satisfies $\|e_0\|_\infty < \rho$.

Lemma 3.1

Let $\mathrm{ct}_0$ be the output of Algorithm 4. Suppose that $\|s\|_\infty = 1$, $t \mid (p_0 - 1)$, $t \mid (q - 1)$, and $p_0 > \frac{4\delta_R^2 + 2\delta_R}{\delta_R - 1}$. Then $\mathrm{ct}_0$ is a BFV ciphertext with noise bounded by $\rho$.

We argue that when $\delta_R \geq 16$, the condition on $p_0$ in Lemma 3.1 is satisfied when $p_0$ is chosen so that $p_0 \geq 5\delta_R + 3$ as per our parameter specifications, since

$5\delta_R + 3 > \frac{32}{7}\delta_R + \frac{16}{7} = \frac{16}{7}(2\delta_R + 1) = \frac{2\delta_R(2\delta_R + 1)}{\frac{7}{8}\delta_R} \geq \frac{4\delta_R^2 + 2\delta_R}{\delta_R - 1}.$

This technique of encryption with a built-in modulus reduction in Step 3 was first mentioned in [19], but is overall not especially well outlined in the literature. Implementations do often reduce the modulus immediately after encryption to reduce noise. For instance, Microsoft SEAL [28] chooses $p_0$ as a "special prime," then generates all keys with an integer modulus of $p_0 q$ (for $q$ a product of some primes) before reducing down to integer modulus $q$ to house any ciphertext data. The SEAL documentation recommends choosing this special prime $p_0$ to be at least as big as any prime divisor of $q$, though it is not a strict requirement. In our modification, we propose computing the modulus reduction locally during encryption and doing so before adding the message bits. The advantage of this approach is that we can choose $p_0$ to be much smaller.

Algorithm 5 gives the decryption of a BFV ciphertext, which is the standard BFV decryption.

Algorithm 5. BFV decryption

BFV.Decrypt$(\mathrm{ct}_0, \mathrm{sk})$
Input: $\mathrm{ct}_0 = (a_0, b_0) \in R_{n,q}^2$ BFV ciphertext,
$\mathrm{sk} = s \in R_{n,3}$ secret key.
Output: $m_0 \in R_{n,t}$ message.
Step 1. Compute $c \leftarrow [b_0 + a_0 s]_{\phi(x), q}$.
Step 2. Compute $m_0 \leftarrow \left[ \left\lfloor \frac{t c}{q} \right\rceil \right]_t$.
Step 3. Return $m_0$.

Lemma 3.2

If the input $\mathrm{ct}_0$ of Algorithm 5 is a BFV ciphertext with noise bounded by $(D_q - 1)/2$ and $t \mid (q-1)$, then the decryption in Algorithm 5 is correct.
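Putting Algorithms 3, 4, and 5 together, the following Python sketch (our own toy instantiation; the parameters $n = 4$, $t = 3$, $q = 64$, $p_0 = 31$ are illustrative only and offer no security) performs a full public-key encryption and decryption round trip:

```python
import random

def cmod(v, q):
    r = v % q
    return r - q if r > q // 2 else r

def nmul(u, v, n):
    res = [0] * n
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            if i + j < n:
                res[i + j] += ui * vj
            else:
                res[i + j - n] -= ui * vj
    return res

def modreduce(ct, Q, q):
    # Algorithm 1: rescale by q/Q with rounding
    rnd = lambda c: cmod((2 * q * c + Q) // (2 * Q), q)
    return [rnd(c) for c in ct[0]], [rnd(c) for c in ct[1]]

n, t, q, p0 = 4, 3, 64, 31           # t | q-1, t | p0-1, p0 >= 5*delta_R + 3
rho = n                              # worst-case noise bound, ||s|| = 1
Dq = (q - 1) // t
rng = random.Random(5)
noise = lambda: [rng.randint(-rho, rho) for _ in range(n)]

# key generation (Algorithm 3, public key part)
s = [1, 0, -1, 1]
k0 = [rng.randint(-p0 * q // 2 + 1, p0 * q // 2) for _ in range(n)]
e = noise()
k1 = [cmod(-(x + ei), p0 * q) for x, ei in zip(nmul(k0, s, n), e)]

# encryption (Algorithm 4): mask with pk, reduce p0*q -> q, then add Dq*m
m = [1, -1, 0, 1]
u = [rng.choice([-1, 0, 1]) for _ in range(n)]
e1, e2 = noise(), noise()
a0 = [cmod(x + y, p0 * q) for x, y in zip(nmul(k0, u, n), e1)]
b0 = [cmod(x + y, p0 * q) for x, y in zip(nmul(k1, u, n), e2)]
a0, b0 = modreduce((a0, b0), p0 * q, q)
b0 = [cmod(x + Dq * mi, q) for x, mi in zip(b0, m)]

# decryption (Algorithm 5): m0 = round(t*c/q) mod t
c = [cmod(x + y, q) for x, y in zip(b0, nmul(a0, s, n))]
m_dec = [cmod((2 * t * ci + q) // (2 * q), t) for ci in c]
assert m_dec == m
```

The pre-reduction noise is at most $\delta_R\rho\|u\|_\infty + \delta_R\rho\|s\|_\infty + \rho = 36$ here, which the modulus reduction from $p_0 q$ to $q$ brings back under $\rho = 4 < (D_q - 1)/2$, so decryption is correct by Lemma 3.2.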

BFV additions and linear combinations. We allow for linear combinations of ciphertexts with scalars from $\mathbb{Z}$. Based on the sum of absolute values of these scalars, we can guarantee a bound on the resulting ciphertext noise. In particular, we discuss the case of linear combinations with scalars $\alpha_0, \ldots, \alpha_{k-1} \in \mathbb{Z}$ such that $\sum_{i=0}^{k-1} |\alpha_i| \leq M$. Assuming each input ciphertext has noise bounded by $E$, a linear combination of $k$ ciphertexts using these scalars results in noise bounded by $M(E+1)$. Algorithm 6 gives the algorithm for linear combinations, while Lemma 3.3 gives the resulting noise bound for BFV ciphertexts.

Algorithm 6. Linear combinations

Linearcombo$(\mathrm{ct}_0, \ldots, \mathrm{ct}_{k-1}, \alpha_0, \ldots, \alpha_{k-1})$
Input: $\mathrm{ct}_0, \ldots, \mathrm{ct}_{k-1} \in R_{n,q}^2$ (or $\mathrm{ct}_0, \ldots, \mathrm{ct}_{k-1} \in R_{n,q}^3$),
$\alpha_0, \ldots, \alpha_{k-1} \in \mathbb{Z}$ scalars.
Output: $\mathrm{ct}_0' \in R_{n,q}^2$ (or $\mathrm{ct}_0' \in R_{n,q}^3$).
Step 1. Set $\mathrm{ct}_0' \leftarrow (0, 0)$ (or $(0, 0, 0)$).
For $i$ from 0 to $k-1$ do
$\mathrm{ct}_0' \leftarrow [\mathrm{ct}_0' + \alpha_i \mathrm{ct}_i]_q$.
Step 2. Return $\mathrm{ct}_0'$.

Lemma 3.3

Suppose the inputs of Algorithm 6 are BFV ciphertexts each with noise bounded by $E$ and suppose $\sum_{i=0}^{k-1} |\alpha_i| \leq M$. Let $\mathrm{ct}_0'$ be the output of Algorithm 6. If $t \mid (q-1)$, then $\mathrm{ct}_0'$ is a BFV ciphertext with noise bounded by $M(E+1)$.

We remark that we also allow inputs of Algorithm 6 to be in $R_{n,q}^3$. For the input of Algorithm 6, when using elements of the form $(c_0, c_1, c_2) \in R_{n,q}^3$ satisfying

$c_0 + c_1 s + c_2 s^2 \equiv D_q m + e \bmod (\phi(x), q)$

for some $m \in R_{n,t}$ and $e \in R_n$, we still refer to $e$ as a noise term (and refer to a bound on $\|e\|_\infty$ as a noise bound). If each input has noise bounded by $E$, we can slightly adjust the proof of Lemma 3.3 to obtain an element in $R_{n,q}^3$ with noise bounded by $M(E+1)$ from the output of Algorithm 6. This alternate choice of inputs will be utilized when discussing budgeted operations in Section 4.1.
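The following Python sketch (our own; the ciphertexts are handcrafted directly from the definition rather than produced by Algorithm 4) forms the linear combination $2\,\mathrm{ct}_0 - 3\,\mathrm{ct}_1$, so $M = 5$, and checks both the decrypted message $[2 m_0 - 3 m_1]_t$ and the noise bound $M(E+1)$ of Lemma 3.3:

```python
import random

def cmod(v, q):
    r = v % q
    return r - q if r > q // 2 else r

def nmul(u, v, n):
    res = [0] * n
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            if i + j < n:
                res[i + j] += ui * vj
            else:
                res[i + j - n] -= ui * vj
    return res

n, t, q = 4, 3, 1024                 # t | q-1 since 1023 = 3*341
Dq = (q - 1) // t
s = [1, 0, -1, 1]
rng = random.Random(9)

def make_ct(m, e):
    # handcrafted BFV ciphertext: b + a*s = Dq*m + e mod (x^n+1, q)
    a = [rng.randint(-q // 2 + 1, q // 2) for _ in range(n)]
    b = [cmod(Dq * mi + ei - x, q) for mi, ei, x in zip(m, e, nmul(a, s, n))]
    return a, b

E = 4
m0, e0 = [1, 0, -1, 1], [3, -4, 2, 0]
m1, e1 = [0, 1, 1, -1], [-2, 4, -1, 3]
ct0, ct1 = make_ct(m0, e0), make_ct(m1, e1)

alphas = [2, -3]
M = sum(abs(al) for al in alphas)    # M = 5
a = [cmod(2 * x - 3 * y, q) for x, y in zip(ct0[0], ct1[0])]
b = [cmod(2 * x - 3 * y, q) for x, y in zip(ct0[1], ct1[1])]

m_sum = [cmod(2 * x - 3 * y, t) for x, y in zip(m0, m1)]   # [2*m0 - 3*m1]_t
c = [cmod(x + y, q) for x, y in zip(b, nmul(a, s, n))]
m_dec = [cmod((2 * t * ci + q) // (2 * q), t) for ci in c]
assert m_dec == m_sum
e_new = [cmod(ci - Dq * mi, q) for ci, mi in zip(c, m_sum)]
assert max(map(abs, e_new)) <= M * (E + 1)                 # Lemma 3.3
```

The "+1" in $M(E+1)$ absorbs the wrap-around $D_q \cdot t r \equiv -r \bmod q$ that occurs when the scalar combination of messages leaves the centered range modulo $t$.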

BFV multiplication. As expected, multiplication incurs a much larger increase in ciphertext noise and requires a more tedious noise analysis. The procedure is again standard for the BFV scheme, as in the study by Fan and Vercauteren [6]. The proof is similar to that presented in the study by Fan and Vercauteren [6], but we give a simpler worst-case noise bound with Lemma 3.4.

Algorithm 7. BFV multiplication

BFV.Multiply$(\mathrm{ct}_0, \mathrm{ct}_1)$
Input: $\mathrm{ct}_0 = (a_0, b_0), \mathrm{ct}_1 = (a_1, b_1) \in R_{n,q}^2$ BFV ciphertexts.
Output: $(c_0, c_1, c_2) \in R_{n,q}^3$.
Step 1. Compute
$c_0' \leftarrow [b_0 b_1]_{\phi(x)}$, $c_1' \leftarrow [b_1 a_0 + b_0 a_1]_{\phi(x)}$, $c_2' \leftarrow [a_0 a_1]_{\phi(x)}$.
Step 2. Compute
$c_0 \leftarrow \left[\left\lfloor \frac{t c_0'}{q} \right\rceil\right]_q$, $c_1 \leftarrow \left[\left\lfloor \frac{t c_1'}{q} \right\rceil\right]_q$, $c_2 \leftarrow \left[\left\lfloor \frac{t c_2'}{q} \right\rceil\right]_q$.
Step 3. Return $(c_0, c_1, c_2)$.

Lemma 3.4

Suppose the inputs of Algorithm 7 are BFV ciphertexts for messages $m_0$ and $m_1$, respectively, both with noise bounded by $E$. Let $(c_0, c_1, c_2)$ be the output of Algorithm 7. If $t \mid (q-1)$ and $\delta_R \geq 16$, then

(1) $c_0 + c_1 s + c_2 s^2 \equiv D_q [m_0 m_1]_{\phi(x), t} + e \bmod (\phi(x), q),$

with $\|e\|_\infty \leq 3.5 E t \rho^2$.

The simple bound provided in Lemma 3.4 will allow us to choose parameters easily and stack moduli as we will do in Section 4, while having minimal influence on functionality.
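The following Python sketch (our own toy instantiation; we take $n = 16$ so that $\delta_R \geq 16$ as Lemma 3.4 requires, and $q = 3 \cdot 2^{17} + 1$ so that $t \mid (q-1)$) runs Algorithm 7 on two handcrafted ciphertexts and checks the congruence (1) together with the bound $\|e\|_\infty \leq 3.5 E t \rho^2$:

```python
import random

def cmod(v, q):
    r = v % q
    return r - q if r > q // 2 else r

def nmul(u, v, n):
    res = [0] * n
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            if i + j < n:
                res[i + j] += ui * vj
            else:
                res[i + j - n] -= ui * vj
    return res

n, t = 16, 3
q = 3 * (1 << 17) + 1                # t | q-1
Dq = (q - 1) // t
rho = n                              # delta_R = n and ||s|| = 1 below
E = rho
rng = random.Random(2)
s = [rng.choice([-1, 1]) for _ in range(n)]

def make_ct(m):
    e = [rng.randint(-E, E) for _ in range(n)]
    a = [rng.randint(-q // 2 + 1, q // 2) for _ in range(n)]
    b = [cmod(Dq * mi + ei - x, q) for mi, ei, x in zip(m, e, nmul(a, s, n))]
    return a, b

m0 = [rng.choice([-1, 0, 1]) for _ in range(n)]
m1 = [rng.choice([-1, 0, 1]) for _ in range(n)]
(a0, b0), (a1, b1) = make_ct(m0), make_ct(m1)

# Algorithm 7: tensor the ciphertexts, then scale each part by t/q and round
rnd = lambda v: cmod((2 * t * v + q) // (2 * q), q)
c0 = [rnd(v) for v in nmul(b0, b1, n)]
c1 = [rnd(x + y) for x, y in zip(nmul(b1, a0, n), nmul(b0, a1, n))]
c2 = [rnd(v) for v in nmul(a0, a1, n)]

# Lemma 3.4: c0 + c1*s + c2*s^2 = Dq*[m0*m1]_t + e mod (x^n+1, q)
s2 = nmul(s, s, n)
lhs = [cmod(x + y + z, q)
       for x, y, z in zip(c0, nmul(c1, s, n), nmul(c2, s2, n))]
m_prod = [cmod(v, t) for v in nmul(m0, m1, n)]
e = [cmod(x - Dq * mi, q) for x, mi in zip(lhs, m_prod)]
assert max(map(abs, e)) <= 3.5 * E * t * rho ** 2
```

With $E = \rho = 16$ the bound $3.5 E t \rho^2 = 43008$ stays below $q/2$, so the noise of the output triple can still be recovered modulo $q$.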

Comparison to current bounds. As mentioned earlier, we can see that our bound is on the order of $E t \delta_R^2$ when choosing $\|s\|_\infty = 1$, where $E$ is the bound on the noise term of each ciphertext before multiplication. Comparing to more current works, [16] achieves a similar multiplication noise bound. Rather than restricting choices of $t$ and $q$, [16] achieves this bound by an alternative encryption method, namely, by computing $b_0 + a_0 s \equiv \lfloor q m / t \rceil + e \bmod (\phi(x), q)$ rather than standard BFV, which computes $b_0 + a_0 s \equiv \lfloor q/t \rfloor m + e \bmod (\phi(x), q)$. We provide a short comparison of multiplication noise bounds, with $e$ being the noise term resulting from multiplication. Most notably, we assume the two ciphertext noise terms are both bounded by $E$ rather than having separate input bounds. Note this comparison does not include relinearization noise. We discuss the additional relinearization noise accumulated for our modified BFV scheme in Lemma 3.5.

Classic BFV [6]:

$\|e\|_\infty \leq 2\delta_R t E (1 + \delta_R \|s\|_\infty) + 2\delta_R^2 t^2 (\|s\|_\infty + 1)^2.$

Improved BFV [16]:

$\|e\|_\infty \leq \frac{\delta_R t}{2}\left( \frac{2E}{q} + (4 + \delta_R \|s\|_\infty) 2E + 1 + \delta_R \|s\|_\infty + \frac{\delta_R^2 \|s\|_\infty^2}{2} \right).$

Our BFV variant:

$\|e\|_\infty \leq 3.5 E t \delta_R^2 \|s\|_\infty^2.$

In the study by Kim et al. [16], the dominant noise term is $\frac{\delta_R t}{2}(\delta_R \|s\|_\infty) 2E = E t \delta_R^2 \|s\|_\infty$. We note that, by going through their proof for the worst-case bound, the factor $\frac{\delta_R}{2}$ in their bound should be $\delta_R \|s\|_\infty$, which is what we used in our proof. Hence, the dominant term should be $2 E t \delta_R^2 \|s\|_\infty$. In comparison, our bound for all the noise terms is $3.5 E t \delta_R^2 \|s\|_\infty^2$, which is slightly bigger than their bound. Our goal was to provide a simple bound that is easier to use in practice. We will expand upon how we use this simple bound further in Section 4.

BFV relinearization. To convert a returned ciphertext from Algorithm 7 back to the proper form of a BFV ciphertext, we can employ a relinearization (or key switch) algorithm [6]. The algorithm converts a linear form in $s$ and $s^2$ to a linear form in only $s$, while introducing a small additional noise term. Note that to accomplish this, we must use the published evaluation key from Algorithm 3, denoted ek. Algorithm 8 gives the relinearization algorithm for BFV. Lemma 3.5 provides the resulting noise bound.

Algorithm 8. BFV Relinearization

BFV.Relinearize$((c_0, c_1, c_2), \mathrm{ek})$
Input: $(c_0, c_1, c_2) \in R_{n,q}^3$ polynomial ordered triple,
$\mathrm{ek} = (k_0', k_1') \in R_{n,p_1 q}^2$ evaluation key.
Output: $(c_0', c_1') \in R_{n,q}^2$.
Step 1. Compute $\beta_0 \leftarrow [c_2 k_0']_{\phi(x), p_1 q}$ and $\beta_1 \leftarrow [c_2 k_1']_{\phi(x), p_1 q}$.
Step 2. Compute $d_0 \leftarrow \left\lfloor \frac{\beta_0}{p_1} \right\rceil$ and $d_1 \leftarrow \left\lfloor \frac{\beta_1}{p_1} \right\rceil$.
Step 3. Compute $c_0' \leftarrow [c_0 + d_0]_q$ and $c_1' \leftarrow [c_1 + d_1]_q$.
Step 4. Return $(c_0', c_1')$.

Lemma 3.5

Let $(c_0', c_1')$ be the output of Algorithm 8 and suppose the input $(c_0, c_1, c_2)$ satisfies (1) in Lemma 3.4. If $p_1 \geq 6q$ and $\delta_R \geq 16$, then $(c_0', c_1')$ is a BFV ciphertext with noise bounded by $3.6 E t \rho^2$.
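The following Python sketch (our own; the input triple is handcrafted with noise bounded by 100, and the extra-noise estimate $\delta_R \rho/12 + (\delta_R + 1)/2 + 1$ is our own rough bound on $\|c_2 e_1\|_\infty / p_1$ plus rounding error, well inside the slack between Lemmas 3.4 and 3.5) runs Algorithm 8 end to end:

```python
import random

def cmod(v, q):
    r = v % q
    return r - q if r > q // 2 else r

def nmul(u, v, n):
    res = [0] * n
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            if i + j < n:
                res[i + j] += ui * vj
            else:
                res[i + j - n] -= ui * vj
    return res

n, t = 16, 3
q = 3 * (1 << 17) + 1
Dq = (q - 1) // t
rho = n
rng = random.Random(4)
s = [rng.choice([-1, 1]) for _ in range(n)]
s2 = nmul(s, s, n)

p1 = 6 * q
while (p1 - 1) % t:                  # enforce t | p1 - 1 as in the specifications
    p1 += 1

# evaluation key (Algorithm 3): k0' + k1'*s = p1*s^2 + e1 mod (x^n+1, p1*q)
k1p = [rng.randint(-p1 * q // 2 + 1, p1 * q // 2) for _ in range(n)]
e1 = [rng.randint(-rho, rho) for _ in range(n)]
k0p = [cmod(-x + p1 * y + z, p1 * q)
       for x, y, z in zip(nmul(k1p, s, n), s2, e1)]

# handcrafted triple: c0 + c1*s + c2*s^2 = Dq*m + e mod (x^n+1, q), ||e|| <= 100
m = [rng.choice([-1, 0, 1]) for _ in range(n)]
e = [rng.randint(-100, 100) for _ in range(n)]
c1 = [rng.randint(-q // 2 + 1, q // 2) for _ in range(n)]
c2 = [rng.randint(-q // 2 + 1, q // 2) for _ in range(n)]
c0 = [cmod(Dq * mi + ei - x - y, q)
      for mi, ei, x, y in zip(m, e, nmul(c1, s, n), nmul(c2, s2, n))]

# Algorithm 8: fold c2 back into the linear part using ek
beta0 = [cmod(v, p1 * q) for v in nmul(c2, k0p, n)]
beta1 = [cmod(v, p1 * q) for v in nmul(c2, k1p, n)]
rnd = lambda v: (2 * v + p1) // (2 * p1)          # round(v / p1)
d0 = [rnd(v) for v in beta0]
d1 = [rnd(v) for v in beta1]
c0r = [cmod(x + y, q) for x, y in zip(c0, d0)]
c1r = [cmod(x + y, q) for x, y in zip(c1, d1)]

# the result is a BFV ciphertext for m; relinearization adds only small noise
c = [cmod(x + y, q) for x, y in zip(c0r, nmul(c1r, s, n))]
e_out = [cmod(x - Dq * mi, q) for x, mi in zip(c, m)]
extra = n * rho / 12 + (n + 1) / 2 + 1            # ||c2*e1||/p1 plus rounding
assert max(map(abs, e_out)) <= 100 + extra
```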

Alternate relinearization technique. Algorithm 8 is not the only option for relinearizing a ciphertext. Another technique [6,14,27,29] involves generating the evaluation key differently, by expanding $c_2$ with respect to some integer base. In this relinearization process, the evaluation key is a vector pair $\mathrm{ek} = (u, v) \in (R_{n,q}^\gamma)^2$, where each entry of $u$ is sampled from $U(R_{n,q})$, and $\gamma$ and $v$ are computed in the following way. For a chosen public base $B \in \mathbb{N}$, find the smallest $\gamma \in \mathbb{N}$ such that $B^\gamma > q$ and define $g \in R_{n,q}^\gamma$ as follows:

$g^T = (1, B, B^2, \ldots, B^{\gamma-1}).$

Let $w \in R_{n,q}^\gamma$ be a vector with each entry sampled from $\chi_\rho$. Compute $v$ as

$v \equiv s^2 g - u s + w \bmod (\phi(x), q).$

To obtain a new relinearized ciphertext from $(c_0, c_1, c_2)$, one can first write

$c_2 = \sum_{j=0}^{\gamma-1} h_j B^j,$

where $h_j \in R_{n,q}$ with $\|h_j\|_\infty \leq \frac{B}{2}$, and define $h \in R_{n,q}^\gamma$ by $h^T = (h_0, h_1, \ldots, h_{\gamma-1})$. Then

$c_2 s^2 = h \cdot (s^2 g).$

Using $\mathrm{ek} = (u, v)$, the new ciphertext can be computed as $([c_1 + h \cdot u]_{\phi(x),q}, [c_0 + h \cdot v]_{\phi(x),q})$, since $c_2 s^2 + h \cdot w \equiv h \cdot v + (h \cdot u) s \bmod (\phi(x), q)$. Here, $h \cdot w$ is the noise introduced during relinearization and satisfies $\|h \cdot w\|_\infty \leq \frac{\gamma B \delta_R^2 \|s\|_\infty}{2}$. However, this technique is less used, since the evaluation key $(u, v) \in (R_{n,q}^\gamma)^2$ is much larger than the evaluation key generated in Algorithm 3. Specifically, $\mathrm{ek} = (u, v)$ is of size $2\gamma \log_2 q$. The noise incurred by relinearization grows linearly with $B$; hence, $B$ must be relatively small, which means $\gamma$ will likely be much bigger than 4. On the other hand, $\mathrm{ek} = (k_0', k_1')$ from Algorithm 3 is of size $4 \log_2 q$. Although Algorithm 3 gives a smaller key size, a larger ring dimension $n$ must be used to maintain security. We refer the reader to the references mentioned earlier for details. Some implementations of BFV, such as Microsoft SEAL [28], do not relinearize their ciphertexts after each multiplication and allow the degree of the linear form in $s$ to grow larger than 2 [30].
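The centered base-$B$ decomposition used above can be sketched as follows (our own Python illustration): each digit is the centered representative modulo $B$, so $\|h_j\|_\infty \leq B/2$, and the $\gamma$ digits reconstruct $c_2$ exactly:

```python
def cmod(v, q):
    r = v % q
    return r - q if r > q // 2 else r

def decompose(c, B, gamma):
    # centered base-B digits: c = sum_j h[j]*B^j with |h[j]| <= B/2
    digits = []
    for _ in range(gamma):
        h = cmod(c, B)
        digits.append(h)
        c = (c - h) // B
    return digits

q, B = 1 << 16, 16
gamma = 1
while B ** gamma <= q:               # smallest gamma with B^gamma > q
    gamma += 1
assert gamma == 5

for c in [0, 1, -1, 12345, -30000, q // 2]:
    h = decompose(c, B, gamma)
    assert all(abs(d) <= B // 2 for d in h)
    assert sum(d * B ** j for j, d in enumerate(h)) == c
```

In the scheme this is applied coefficientwise to the polynomial $c_2$, giving the vector $h = (h_0, \ldots, h_{\gamma-1})$ paired against the gadget vector $g$.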

3.2 Modified BGV scheme

BGV key generation. As we did with BFV, we use a slightly different key generation process from the standard BGV scheme [4,5] by generating the public key and evaluation key in a larger modulus to reduce noise sizes in ciphertexts. Algorithm 9 gives the key generation for the BGV keys. Just like BFV, sk is kept secret, while pk and ek are published.

Algorithm 9. BGV key generation

BGV.Keygen ( q , p 0 , p 1 )
Input: q N ,
p 0 N with p 0 5 δ R + 3 ,
p 1 N with p 1 6 q .
Output: sk = s R n , 3 secret key,
pk = ( k 0 , k 1 ) R n , p 0 q 2 public key,
ek = ( k 0 , k 1 ) R n , p 1 q 2 evaluation key.
Step 1. Choose randomly s R n , 3 .
Step 2. Sample k 0 U ( R n , p 0 q ) and e χ ρ .
Compute k 1 [ ( k 0 s + t e ) ] ϕ ( x ) , p 0 q .
Step 3. Sample k 1 U ( R n , p 1 q ) and e 1 χ ρ .
Compute k 0 [ k 1 s + p 1 s 2 + t e 1 ] ϕ ( x ) , p 1 q .
Step 4. Return sk = s , pk = ( k 0 , k 1 ) , and ek = ( k 0 , k 1 ) .

BGV encryption and decryption. We define the BGV public key encryption in Algorithm 10. Decryption of a BGV ciphertext is given in Algorithm 11. When we refer to a “BGV ciphertext” in these algorithms and lemmas, we mean an ordered pair ct = ( a , b ) R n , q 2 satisfying

b + a s m + t e mod ( ϕ ( x ) , q )

for some noise term e R n and given message m R n , t . Lemma 3.6 provides proof of correctness for encryption, as well as the corresponding noise bound resulting from encryption.

Algorithm 10. Modified BGV encryption

BGV.Encrypt ( m 0 , pk )
Input: m 0 R n , t message,
pk = ( k 0 , k 1 ) R n , p 0 q 2 public key.
Output: ct 0 = ( a 0 , b 0 ) R n , q 2 BGV ciphertext.
Step 1. Sample u U ( R n , 3 ) , and sample e 1 , e 2 χ ρ .
Step 2. Compute ( a 0 , b 0 ) R n , p 0 q 2 , where
a 0 [ k 0 u + t e 1 ] ϕ ( x ) , p 0 q ,
b 0 [ k 1 u + t e 2 ] ϕ ( x ) , p 0 q .
Step 3. Compute
( a 0 , b 0 * ) BGV.Modreduce ( ( a 0 , b 0 ) , p 0 q , q ) ,
b 0 [ b 0 * + m 0 ] q .
Step 4. Return ct 0 = ( a 0 , b 0 ) R n , q 2 .

Lemma 3.6

Let ct 0 be the output of Algorithm 10. Suppose that ∥ s ∥ = 1 , t ∣ ( p 0 − 1 ) , t ∣ ( q − 1 ) , and p 0 > ( 4 δ R 2 + 2 δ R ) ∕ ( δ R − 2 ) . Then ct 0 is a BGV ciphertext with noise bounded by ρ .

Algorithm 11. BGV decryption

BGV.Decrypt ( ct 0 , sk )
Input: ct 0 = ( a 0 , b 0 ) R n , q 2 BGV ciphertext,
sk = s R n , 3 secret key.
Output: m 0 R n , t message.
Step 1. Compute c [ b 0 + a 0 s ] ϕ ( x ) , q .
Step 2. Compute m 0 [ c ] t .
Step 3. Return m 0 .

Just as with BFV, the condition on p 0 can be discussed in the more general case for any choice of u and s , in which the condition on p 0 is p 0 > ( 2 δ R 2 ( ∥ u ∥ + ∥ s ∥ ) + 2 δ R ) ∕ ( δ R ∥ s ∥ − 2 ) and the resulting noise term e 0 satisfies ∥ e 0 ∥ < ρ . Similar to BFV as well, we argue that when δ R ≥ 16 , the condition on p 0 in Lemma 3.6 is satisfied when p 0 is chosen so that p 0 ≥ 5 δ R + 3 as per our parameter specifications since

5 δ R + 3 > ( 32 ∕ 7 ) δ R + 16 ∕ 7 = ( 16 ∕ 7 ) ( 2 δ R + 1 ) = 2 δ R ( 2 δ R + 1 ) ∕ ( ( 7 ∕ 8 ) δ R ) ≥ ( 4 δ R 2 + 2 δ R ) ∕ ( δ R − 2 ) .

Regarding decryption, the proof of correctness for Algorithm 11 is straightforward. Simply observe that [ [ b 0 + a 0 s ] ϕ ( x ) , q ] t = [ m 0 + t e 0 ] t = m 0 . The key observation here is that in order for correctness to hold, it is required that ∥ m 0 + t e 0 ∥ < q ∕ 2 . That is, fully reducing b 0 + a 0 s modulo q will actually yield the correct polynomial m 0 + t e 0 . The worst-case bound on ∥ m 0 + t e 0 ∥ is t ∕ 2 + t E if ∥ e 0 ∥ ≤ E . Hence, it suffices to require E < q ∕ ( 2 t ) − 1 ∕ 2 . This is very similar to the condition given earlier needed for correct BFV decryption.
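The decryption identity can be checked with a toy scalar sketch; all values below are illustrative, and the actual scheme works with polynomials in R n , q .

```python
# Toy scalar illustration of BGV decryption: [[b + a*s]_q]_t = m.
# Parameters are illustrative; in the scheme these are ring elements.

def centered(x, q):
    """Representative of x mod q in (-q/2, q/2]."""
    r = x % q
    return r - q if r > q // 2 else r

q, t = 2**30, 257
s, m, e = 1, 123, 40                  # secret, message (|m| < t), small noise
a = 987_654_321 % q                   # a "uniform" mask
b = centered(m + t * e - a * s, q)    # ciphertext relation b + a*s = m + t*e (mod q)

inner = centered(b + a * s, q)        # Step 1: c = [b + a*s]_q
assert inner == m + t * e             # holds because |m + t*e| < q/2
assert centered(inner, t) == m        # Step 2: m = [c]_t
```

If the noise were large enough that m + t e wrapped around modulo q, the first assertion would fail, which is exactly the correctness condition derived above.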

BGV additions and linear combinations. Additions and linear combinations for BGV can be done using Algorithm 6. The argument is similar to Lemma 3.3 for BFV ciphertexts and results in the same noise bound of M ( E + 1 ) . It is worth noting that divisibility of q − 1 by t yields no noise improvement for BGV addition, and the noise bound of M ( E + 1 ) holds for any t and q .

BGV multiplication. Again, multiplication incurs a large noise increase during homomorphic computation. Unlike BFV, there is no requirement that t divides q − 1 , and the noise analysis for BGV multiplication is simpler than BFV, as no scaling by t ∕ q is required after computing the necessary components given from ct 0 and ct 1 . Algorithm 12 outlines the procedure for BGV multiplication. Lemma 3.7 provides a proof of correctness and the corresponding noise bound.

Algorithm 12. BGV multiplication

BGV.Multiply ( ct 0 , ct 1 )
Input: ct 0 = ( a 0 , b 0 ) , ct 1 = ( a 1 , b 1 ) R n , q 2 ciphertexts.
Output: ( c 0 , c 1 , c 2 ) R n , q 3 .
Step 1. Compute
c 0 [ b 0 b 1 ] ϕ ( x ) , q ,
c 1 [ b 1 a 0 + b 0 a 1 ] ϕ ( x ) , q ,
c 2 [ a 0 a 1 ] ϕ ( x ) , q .
Step 2. Return ( c 0 , c 1 , c 2 ) .

Lemma 3.7

Suppose the inputs of Algorithm 12 are BGV ciphertexts for messages m 0 and m 1 , respectively, both with noise bounded by E. Let ( c 0 , c 1 , c 2 ) be the output of Algorithm 12. Then

(2) c 0 + c 1 s + c 2 s 2 [ m 0 m 1 ] ϕ ( x ) , t + t e mod ( ϕ ( x ) , q )

with e 2 δ R t ( E 2 + 1 ) .
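Identity (2) can be verified with a toy scalar check; the values below are illustrative, and the ring case replaces integers by polynomials.

```python
# Toy scalar check of the BGV multiplication identity (2):
# c0 + c1*s + c2*s^2 = [m0*m1]_t + t*e (mod q) for some noise e.

def centered(x, q):
    """Representative of x mod q in (-q/2, q/2]."""
    r = x % q
    return r - q if r > q // 2 else r

q, t, s = 2**40, 257, 2

def encrypt(m, e, a):
    """Ciphertext (a, b) with b + a*s = m + t*e (mod q)."""
    return (a, centered(m + t * e - a * s, q))

(a0, b0) = encrypt(100, 3, 123_456)
(a1, b1) = encrypt(200, 5, 654_321)

c0 = centered(b0 * b1, q)                # Step 1 of Algorithm 12
c1 = centered(b1 * a0 + b0 * a1, q)
c2 = centered(a0 * a1, q)

lhs = centered(c0 + c1 * s + c2 * s * s, q)
prod = centered(100 * 200, t)            # [m0*m1]_t
e = (lhs - prod) // t                    # the multiplication noise
assert lhs == prod + t * e               # identity (2) holds exactly
assert centered(lhs, t) == prod          # reduction mod t still recovers [m0*m1]_t
```

The linear form in s and s 2 is what the subsequent relinearization step removes.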

BGV relinearization. We can relinearize a BGV ciphertext to rewrite the left-hand side of equation (2) as a linear form in only s rather than s and s 2 . For BGV, a slightly modified evaluation key must be generated, as well as a slightly modified relinearization algorithm. Algorithms 9 and 13 give the BGV evaluation key generation and relinearization, respectively, which we have based on the algorithms in [16]. Lemma 3.8 provides proof of correctness of Algorithm 13 and the corresponding noise bound. Algorithm 13 and the result of Lemma 3.8 can be combined with Algorithm 12 and the result of Lemma 3.7, respectively, to obtain a full BGV multiplication operation.

Lemma 3.8

Let ( c 0 , c 1 ) be the output of Algorithm 13 and suppose the input ( c 0 , c 1 , c 2 ) satisfies (2) in Lemma 3.7. If p 1 6 q and δ R 16 , then ( c 0 , c 1 ) is a BGV ciphertext with noise bounded by 2 δ R t ( E 2 + 1 ) + 1 8 δ R 2 s .

Algorithm 13. BGV relinearization.

BGV.Relinearize ( ( c 0 , c 1 , c 2 ) , ek )
Input: ( c 0 , c 1 , c 2 ) R n , q 3 ,
ek = ( k 0 , k 1 ) R n , p 1 q 2 evaluation key.
Output: ( c 0 , c 1 ) R n , q 2 .
Step 1. Compute β 0 [ c 2 k 0 ] ϕ ( x ) , p 1 q and β 1 [ c 2 k 1 ] ϕ ( x ) , p 1 q .
Step 2. Compute ω 0 ← [ − t − 1 β 0 ] p 1 and ω 1 ← [ − t − 1 β 1 ] p 1 , where t − 1 is the inverse of t modulo p 1 .
Step 3. Compute d 0 ← ( β 0 + t ω 0 ) ∕ p 1 and d 1 ← ( β 1 + t ω 1 ) ∕ p 1 ; both divisions are exact since t ω i ≡ − β i ( mod p 1 ) .
Step 4. Compute c 0 [ c 0 + d 0 ] q and c 1 [ c 1 + d 1 ] q .
Step 5. Return ( c 0 , c 1 ) .
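The division in Step 3 is an exact integer division. The following toy scalar sketch illustrates the t-correction, assuming the sign convention ω = [ − t − 1 β ] p 1 so that β + t ω is divisible by p 1 ; t , p 1 , and β are illustrative values with gcd ( t , p 1 ) = 1 .

```python
# Scalar sketch of Steps 2-3 of Algorithm 13: modulus switching with a
# t-correction.  Assumption: omega = [-t^{-1} * beta]_{p1}, which makes
# beta + t*omega divisible by p1, so Step 3 is exact integer division.
# t, p1, beta are illustrative values with gcd(t, p1) = 1.

def centered(x, q):
    """Representative of x mod q in (-q/2, q/2]."""
    r = x % q
    return r - q if r > q // 2 else r

t, p1 = 257, 2**20 + 3
beta = 123_456_789

omega = centered(-pow(t, -1, p1) * beta, p1)   # Step 2, |omega| <= p1/2
assert (beta + t * omega) % p1 == 0            # the division below is exact
d = (beta + t * omega) // p1                   # Step 3

assert abs(d - beta / p1) <= t / 2             # d is close to beta/p1 ...
assert d % t == (beta * pow(p1, -1, t)) % t    # ... and d = beta * p1^{-1} (mod t)
```

The two final assertions capture why the trick works: d approximates the scaled-down β while preserving its residue modulo t, so the plaintext is untouched.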

3.3 Modified CKKS scheme

In this section, we will discuss the CKKS scheme [7]. CKKS allows for homomorphic computation on approximate numbers, rather than the exact arithmetic supported by BFV and BGV. This is done by first taking in data as some vector over C , mapping the components into R n , and then performing the homomorphic computation before mapping back to a vector over C . This process of mapping to and from the C -vector space is known as the encoding and decoding procedure, respectively. Throughout Section 3.3 and whenever referring to CKKS, we will always assume ϕ ( x ) = x n + 1 , where n is a power of two. Thus, δ R = n .

Message encoding and decoding. Recall that R n = Z [ x ] ( ϕ ( x ) ) and R n , q = Z q [ x ] ( ϕ ( x ) ) . Let H = { z C n : z j = z ¯ n j + 1 } . Define two mappings:

π : H → C n ∕ 2 , σ : C [ x ] ∕ ( ϕ ( x ) ) → C n .

Here, π is the projection of H onto C n 2 , by keeping only the first half of the entries for each vector in H , and σ is the canonical embedding map defined as follows. Note that the polynomial ϕ ( x ) = x n + 1 has n complex roots, say ζ 1 , ζ 2 , , ζ n in any fixed order, which are all primitive roots of unity of order 2 n . Given a polynomial h C [ x ] ( ϕ ( x ) ) , σ is defined via

σ ( h ) = ( h ( ζ 1 ) , h ( ζ 2 ) , , h ( ζ n ) ) C n .

That is, σ evaluates h at all the roots of ϕ ( x ) and stores the evaluations as a vector. Note that both π and σ serve as isomorphisms of vector spaces over C , so π − 1 and σ − 1 exist. In practice, σ is computed via a fast Fourier transform (FFT), and σ − 1 via an inverse FFT.

The purpose of these mappings is that given a message vector z C n 2 , we want to convert it into a polynomial in R n whose values at ζ i correspond to w = π 1 ( z ) , hence polynomial multiplication corresponds to component-wise multiplication for message vectors. We now must map π 1 ( z ) into R n . Given ζ 1 , ζ 2 , , ζ n in a fixed order such that ( ζ 1 , ζ 2 , , ζ n ) H , σ then serves as an isomorphism between R [ x ] ( ϕ ( x ) ) and H . So, for w H , we can compute σ 1 ( w ) R [ x ] ( ϕ ( x ) ) and then round each coefficient to obtain an element in R n .

It is worth noting that most texts use a technique called coordinate-wise random rounding instead of rounding to the nearest integer [25]. However, we will use the closest integer rounding. As we will see, this step of rounding causes accuracy loss in the message. To avoid this, we scale by some positive integer Δ to preserve some desired precision of our message in the end result. The message encoding function is defined as follows:

Ecd ( z , Δ ) = ⌊ σ − 1 ( Δ ⋅ π − 1 ( z ) ) ⌉ ∈ R n ,

for any message z C n 2 , and the message decoding function is defined as follows:

Dcd ( m , Δ ) = π ( σ ( Δ − 1 m ) ) ,

for any polynomial m R n .
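A minimal sketch of Ecd and Dcd for ϕ ( x ) = x n + 1 is given below; n and Δ are illustrative, and σ is computed by direct evaluation at the roots rather than by an FFT for clarity.

```python
# Minimal sketch of CKKS message encoding/decoding for phi(x) = x^n + 1
# via the canonical embedding, with nearest-integer rounding as in the text.
# n and Delta are illustrative; sigma is evaluated directly, not by FFT.
import cmath

n = 8                                             # ring dimension (power of two)
roots = [cmath.exp(1j * cmath.pi * (2*k + 1) / n) for k in range(n)]
# conj(roots[k]) == roots[n-1-k], so real polynomials map into H

def sigma(coeffs):                                # evaluate at all roots of x^n+1
    return [sum(c * z**j for j, c in enumerate(coeffs)) for z in roots]

def sigma_inv(values):                            # inverse of the embedding
    return [sum(v * z**(-j) for v, z in zip(values, roots)) / n
            for j in range(n)]

def ecd(z, delta):                                # Ecd(z, Delta) in R_n
    w = [delta * x for x in z] + [(delta * x).conjugate() for x in reversed(z)]
    return [round(c.real) for c in sigma_inv(w)]  # pi^{-1}, then round coeffs

def dcd(m, delta):                                # Dcd(m, Delta) in C^{n/2}
    return [v / delta for v in sigma(m)[: n // 2]]   # pi keeps the first half

z = [3 + 4j, -1.5 + 0j, 0 + 2j, 0.25 - 0.75j]     # message in C^{n/2}
m = ecd(z, delta=2**20)                           # integer coefficient vector
z_back = dcd(m, delta=2**20)
assert all(abs(a - b) < 1e-4 for a, b in zip(z, z_back))
```

The round trip loses only the coefficient-rounding error, which is attenuated by the factor Δ; this is exactly the precision trade-off described above.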

The encryption and decryption procedures for CKKS then map between R n and R n , q . A high level overview of the mappings in CKKS is shown below. Note that q is used for the integer modulus of the ciphertext space after homomorphic computation, as we may have a different integer modulus if we perform any modulus reduction.

(3.3) [Diagram: messages in C n ∕ 2 are encoded into R n , encrypted into R n , q 2 , and after homomorphic computation decrypted and decoded back to C n ∕ 2 .]

Note the scaling factor Δ affects the ending precision and is usually chosen proportionally to the moduli gaps, which is discussed later in this section. It is also worth mentioning that although Ecd is defined for all messages z C n 2 , in practice z is taken in the space of fixed precision numbers of some length, which is a subset of C n 2 .

The remainder of Section 3.3 is devoted to the homomorphic computation in R n , q 2 for the CKKS scheme. A significant observation for CKKS is how the homomorphic computation relates to the computation in C n 2 . In particular, for vectors z , z C n 2 , we denote z z as the Hadamard product of z and z (i.e., the vector obtained from component-wise multiplication between z and z ). Homomorphic multiplication in R n , q 2 of two ciphertexts corresponds with the Hadamard product of the respective vectors in C n 2 , whereas homomorphic addition corresponds with standard vector addition of the respective message vectors.

CKKS rescaling. Regarding modulus reduction in CKKS, a similar procedure known as rescaling occurs. The rescaling procedure is identical to the modulus reduction for BFV outlined in Algorithm 1. That is,

CKKS.Modreduce = BFV.Modreduce .

The main difference is the purpose of the procedure. Rather than using modulus reduction as a form of noise control, it is used here to control precision. For two message encodings m 0 , m 1 ∈ R n , ciphertext multiplication yields an encryption of the product m 0 m 1 , which takes up additional least significant bits (LSBs). We rescale the corresponding ciphertext of m 0 m 1 to discard the least significant digits, so that further computation keeps only a fixed number of digits. While the rescaling serves a different purpose than modulus reduction, we can still discuss bounds on the corresponding error term achieved. Lemma 3.9 outlines our worst-case noise bound. When we refer to a “CKKS ciphertext” in these algorithms and lemmas, we mean an ordered pair ct = ( a , b ) ∈ R n , q 2 satisfying

b + a s m + e mod ( ϕ ( x ) , q )

for some noise term e R [ x ] ( ϕ ( x ) , q ) and m = Ecd ( z , Δ ) R [ x ] ( ϕ ( x ) , q ) for some z C n 2 .

Lemma 3.9

Suppose the input of Algorithm 1 is a CKKS ciphertext with noise bounded by E. Let ct 0 be the output of Algorithm 1. Then,

b 0 + a 0 s q Q m 0 + e 0 mod ( ϕ ( x ) , q )

and e 0 q Q E + 1 + δ R s 2 . Furthermore, if Q q > 2 E δ R s 1 , then e 0 < ρ .

A notable difference in CKKS rescaling is that the algorithm returns an encryption of q Q m 0 rather than the original m 0 encoding. As mentioned, this is intentional, as we wish to reduce the size of m 0 since bit usage becomes an issue. The reason we use a modulus reduction algorithm rather than simply trying to scale down the ciphertext is because we are taking entries modulo Q . For a ciphertext ( a 0 , b 0 ) R n , Q 2 , if we computed a scaled ciphertext ( a 0 Δ , b 0 Δ ) for some scaling factor Δ , we would first need to write b 0 + a 0 s m 0 + e 0 + Q r for a polynomial r R n . This would result in a term approximately equal to Q r Δ after dividing through by Δ , which is no longer equivalent to 0 mod Q and would result in a huge noise term. However, it is still important to choose Q q to be approximately Δ , or whatever desired scaling factor is needed. Accuracy of the approximation relies on this size of Q q . When not concerned with RNS representation, we can simply choose Δ = Q q exactly. In the RNS variant of CKKS [21], a bound is placed on the gap between Q q and Δ to ensure some precision, while still allowing for coprime moduli Q and q .
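As a scalar toy illustration of rescaling (illustrative parameters; the polynomial case is componentwise analogous), scaling both components by q ∕ Q and rounding turns an encryption of m into an encryption of ( q ∕ Q ) m with only a small additive noise:

```python
# Scalar sketch of CKKS rescaling: each ciphertext component is scaled by
# q/Q and rounded, turning b + a*s = m + e (mod Q) into an encryption of
# (q/Q)*m with small added noise.  All parameter values are illustrative.

def centered(x, q):
    """Representative of x mod q in (-q/2, q/2]."""
    r = x % q
    return r - q if r > q // 2 else r

def round_div(x, d):
    """Nearest integer to x/d, in exact integer arithmetic."""
    return (2 * x + d) // (2 * d)

q, delta = 2**40, 2**20
Q = q * delta                      # so q/Q = 1/delta
s = 3
m = 5 * delta**2                   # message carrying a hidden delta^2 factor
e = 11                             # small noise
a = 123_456_789_123 % Q
b = centered(m + e - a * s, Q)     # ciphertext relation b + a*s = m + e (mod Q)

a2 = round_div(a * q, Q)           # rescale both components by q/Q
b2 = round_div(b * q, Q)
inner = centered(b2 + a2 * s, q)
# new relation: b2 + a2*s = m/delta + e' (mod q) with |e'| <= (1+s)/2 + 1
assert abs(inner - m // delta) <= (1 + s) / 2 + 1
```

Note the rounding is applied to each component before reduction modulo q, which is why no huge Q r ∕ Δ term appears.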

CKKS key generation. For CKKS, the keys used are generated in the same way that the BFV keys are generated. In this case, we refer the reader to Algorithm 3 for generation of the CKKS keys, which again includes the secret key sk, the public key pk, and the evaluation key ek.

CKKS encryption and decryption. The encryption algorithm is given by Algorithm 14, and decryption by Algorithm 15. Note that CKKS encryption in Algorithm 14 uses Algorithm 1 as a subroutine, which is the rescaling. From Step 2 of Algorithm 15, we obtain m 0 ′ . However, recall that b 0 + a 0 s ≡ m 0 + e 0 mod ( ϕ ( x ) , q ) , so in this case, we really have that m 0 ′ = m 0 + e 0 . In other words, m 0 ′ is a close approximation of m 0 so long as e 0 is small. We do not include a proof that decryption works, as it is directly apparent from the algorithm that it decrypts to an approximation of the desired message. We also note that many texts, such as the original CKKS paper by Cheon et al. [7], have separate steps for encoding/decoding and encryption/decryption. We include the encoding or decoding in the encryption or decryption algorithms, respectively. Aside from the encoding step, the encryption algorithm for CKKS is actually identical to a BFV encryption with D q = 1 . Lemma 3.10 provides the corresponding noise bound after encryption. As with the other schemes, choosing p 0 ≥ 5 δ R + 3 ensures the condition for p 0 in Lemma 3.10 holds. The condition on p 0 can also be generalized to p 0 > ( 2 δ R 2 ( ∥ u ∥ + ∥ s ∥ ) + 2 δ R ) ∕ ( δ R ∥ s ∥ − 1 ) with the resulting noise still satisfying ∥ e 0 ∥ < ρ .

Algorithm 14. Modified CKKS encryption

CKKS.Encrypt ( z , Δ , pk )
Input: z C n 2 message,
Δ N scaling factor,
pk = ( k 0 , k 1 ) R n , p 0 q 2 public key.
Output: ct = ( a , b ) R n , q 2 CKKS ciphertext.
Step 1. Encode z by computing m Ecd ( z , Δ ) .
Step 2. Compute ct BFV.Encrypt ( m , 1 , pk ) .
Step 3. Return ct = ( a , b ) R n , q 2 .

Algorithm 15. CKKS decryption

CKKS.Decrypt ( ct , sk )
Input: ct = ( a , b ) R n , q 2 CKKS ciphertext,
sk = s R n , 3 secret key.
Output: z C n 2 message.
Step 1. Compute m [ b + a s ] ϕ ( x ) , q .
Step 2. Decode m by computing z Dcd ( m , Δ ) .
Step 3. Return z .

Lemma 3.10

Let ct be the output of Algorithm 14. Suppose ∥ s ∥ = 1 and p 0 > ( 4 δ R 2 + 2 δ R ) ∕ ( δ R − 1 ) . Then ct is a CKKS ciphertext with noise bounded by ρ .

CKKS additions and linear combinations. We can perform additions and linear combinations with CKKS ciphertexts using Algorithm 6. The resulting ciphertext obtained from Algorithm 6 has a slightly different noise bound than BFV and BGV. The reason for this is that the encoded messages which the ciphertexts represent are in R n instead of R n , t , so reduction modulo t is not necessary with CKKS. The result is summarized in Lemma 3.11.

Lemma 3.11

Suppose the inputs of Algorithm 6 are CKKS ciphertexts each with noise bounded by E and suppose i = 0 k 1 α i M . Let ct 0 be the output of Algorithm 6. Then, ct 0 is a CKKS ciphertext with noise bounded by M E .

CKKS multiplication. Multiplication in CKKS follows the same process as BGV, which is given in Algorithm 12. Thus,

CKKS.Multiply = BGV.Multiply .

The difference is only a slightly different noise bound, due to the plaintexts not being in R n , t and thus not needing reduction modulo t . Lemma 3.12 outlines the result and proof of the corresponding noise bound.

Lemma 3.12

Suppose the inputs of Algorithm 12 are CKKS ciphertexts for messages m 0 and m 1 , respectively, both with noise bounded by E. Suppose that m 0 t 2 and m 1 t 2 . Let ( c 0 , c 1 , c 2 ) be the output of Algorithm 12. Then

(3) c 0 + c 1 s + c 2 s 2 m 0 m 1 + e mod ( ϕ ( x ) , q )

with e E t δ R + E 2 δ R .

CKKS relinearization. As with the other schemes, full multiplication can then be achieved by including the relinearization process discussed in Algorithm 8, so

CKKS.Relinearize = BFV.Relinearize .

The proof is almost identical to the proof in Lemma 3.5. The result for CKKS is given in Lemma 3.13.

Lemma 3.13

Let ( c 0 , c 1 ) be the output of Algorithm 8 and suppose the input ( c 0 , c 1 , c 2 ) satisfies (3) in Lemma 3.12, with m 0 t 2 and m 1 t 2 . If p 1 6 q and δ R 16 , then ( c 0 , c 1 ) is a CKKS ciphertext with noise bounded by E t δ R + E 2 δ R + 1 8 δ R 2 s .

Note on BFV versus CKKS. Initially, the formatting of ciphertexts in BFV and CKKS seems very similar. For a message m 0 ∈ R n , t , a BFV ciphertext ct 0 = ( a 0 , b 0 ) satisfies b 0 + a 0 s ≡ D q m 0 + e 0 mod ( ϕ ( x ) , q ) . In CKKS, the encoding step for a message z 0 ∈ C n ∕ 2 scales our message by a factor of Δ . That is, our CKKS ciphertext ct 0 = ( a 0 , b 0 ) satisfies b 0 + a 0 s ≡ m 0 + e 0 mod ( ϕ ( x ) , q ) , where m 0 = Ecd ( z 0 , Δ ) . In both equations for BFV and CKKS, we have a scaling factor attached to our message. In BFV, the message m 0 is directly multiplied by D q , while in CKKS, Δ multiplies the original message z 0 and is implicitly hiding in m 0 . Nonetheless, both have a scaling factor. In multiplication of both schemes, this scaling factor initially compounds in the first step. The two schemes handle this issue differently, however, with BFV rescaling the individual degree-2 ciphertext components in Step 2 of Algorithm 2, before taking the computed polynomials modulo q . CKKS, on the other hand, computes the initial polynomials in multiplication and immediately reduces them modulo q , relinearizes the ciphertext, and then rescales the hidden Δ 2 back to Δ using modulus reduction in Algorithm 1. Part of the reason these schemes differ in where they rescale is the fact that D q ≠ Δ . Since D q = ⌊ q ∕ t ⌋ and t < q 1 ∕ 2 , we have D q 2 > q , so rescaling must occur before taking components modulo q . On the other hand, Δ in CKKS has more freedom in choice, as it is a parameter chosen by the user that can influence the accuracy of the end result of the computation.

3.4 Comparison to other noise bound analyses

Our noise analysis differs from previous works [6,7,16,17] in that we derive worst-case bounds based on worst-case bound assumptions on the error distribution. As a result, our correctness guarantees are deterministic; there is no probability of decryption error. In addition, we simplify the derived bounds into clean, closed-form expressions that will be useful for subsequent sections. This simplification comes at the cost of slightly looser bounds overall, with the effect being most pronounced in BFV multiplication. A detailed comparison of BFV multiplication appears near the end of Section 3.1.

For most operations, our bounds are very close to the worst-case results presented in the study by Kim et al. [16], though with a few important distinctions. First, in all of our modulus reduction lemmas, we explicitly bound the ratio Q q to ensure that the noise remains below a fixed threshold, which supports more precise composability in computations. Second, our modified encryption procedure enables much smaller choices of p 0 by performing modulus reduction before message embedding. This results in fresh ciphertexts with noise bounded by a constant, while preserving correctness. The same structure can be extended to CKKS rescaling during encryption. Since the message bits are not yet introduced at that stage, rescaling does not degrade precision.

In addition, under basic assumptions on δ R , we are able to significantly simplify the relinearization noise bounds. This is especially helpful in regimes with small plaintext modulus t , where relinearization noise can be more significant.

For concrete estimates, many works adopt δ R = 2 √n as an expansion factor that holds with high probability. In contrast, we use the exact value δ R = n in later derivations. In the case of CKKS [7,17], the most significant differences in noise growth appear in fresh encryptions, relinearization (i.e., key switching), and rescaling. Across these operations, our analysis yields a dominant noise term on the order of n , compared to √n in the aforementioned works. This is primarily due to our adoption of a strict worst-case model and the associated choice of expansion factor. For average-case analyses such as those presented in the study by Costache et al. [17], direct comparison is more nuanced, as noise growth depends on the variance of sampled noise terms.

4 Leveled schemes and RNS variants

For practical computation, we employ a leveled homomorphic encryption scheme rather than a fully homomorphic one. Unlike fully homomorphic schemes – which support an unlimited number of operations via costly bootstrapping – a leveled scheme supports a predetermined number of homomorphic operations, making it more efficient for realistic workloads. In this section, we outline leveled versions of the BFV, BGV, and CKKS schemes. The core idea is to carry out computations at decreasing modulus levels: perform a fixed number of operations at a given modulus, then reduce both the modulus and the noise to enable further computation.

Let q ℓ > q ℓ − 1 > ⋯ > q 0 > 1 be distinct primes, and define

Q i = ∏ j = 0 i q j , 0 ≤ i ≤ ℓ .

We refer to Q i as the modulus at level i or simply the level-i modulus. In Section 4.1, we describe how to select each q i so that a budgeted operation, called a depth-1 multiplication, can be performed at level Q i . Specifically, if the input ciphertexts at level Q i have noise bounded by ρ , then the resulting ciphertext – after multiplication and modulus switching to Q i 1 – continues to maintain the same noise bound ρ .

Section 4.2 details how ciphertext operations are performed in the RNS. By the Chinese remainder theorem, any polynomial a R n , Q can be represented as follows:

[ a ] ℬ = ( a ( 0 ) , a ( 1 ) , … , a ( ℓ ) ) ,

where a ( i ) ≡ a mod q i , and ℬ = ( q 0 , q 1 , … , q ℓ ) is called the modulus basis (or simply the basis). This representation is referred to as the RNS form of a .

All ciphertexts, public keys, and evaluation keys are stored in RNS form with respect to appropriate modulus bases. A key advantage of RNS is that addition and multiplication of polynomials can be performed component-wise, independently across the q i . However, operations such as modulus reduction and relinearization are more involved. In Section 4.2, we describe how these operations are implemented in RNS for the BFV, BGV, and CKKS schemes, and we present the associated noise bounds.
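As a small illustration of this componentwise behavior (with illustrative toy moduli; in the schemes the residues are polynomial coefficients):

```python
# Demonstration that addition and multiplication act component-wise on
# RNS (CRT) representations.  Moduli are illustrative pairwise-coprime primes.
from math import prod

basis = [97, 101, 103]
Q = prod(basis)

def to_rns(x):
    return [x % qi for qi in basis]

def from_rns(res):                      # CRT reconstruction
    x = 0
    for qi, ri in zip(basis, res):
        Qi = Q // qi
        x += ri * Qi * pow(Qi, -1, qi)  # qi-th CRT term
    return x % Q

a, b = 123_456 % Q, 654_321 % Q
add = [(x + y) % qi for x, y, qi in zip(to_rns(a), to_rns(b), basis)]
mul = [(x * y) % qi for x, y, qi in zip(to_rns(a), to_rns(b), basis)]
assert from_rns(add) == (a + b) % Q     # componentwise sum is the sum mod Q
assert from_rns(mul) == (a * b) % Q     # componentwise product is the product mod Q
```

No arithmetic modulo the large Q is needed during the operations themselves, which is the computational advantage RNS provides.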

4.1 Budgeted operations at each level

For a collection of ciphertexts, we want to know how much homomorphic computation we can perform before ciphertext noise becomes too big to the point that no further computation can be performed. To do this, we introduce the concept of a depth-1 multiplication computation.

Definition 4.1

(Depth-1 multiplication) Suppose we have a collection of messages. For fixed k 1 and k 2 , we say that we can perform a depth-1 multiplication if we can perform 2 k 2 groups of k 1 1 additions, followed by one round of k 2 multiplications, followed by k 2 1 additions.

Figure 1 shows an arbitrary depth-1 multiplication with 2 k 2 k 1 plaintexts, where m j , k is a plaintext for each j = 1 , … , 2 k 2 , k = 1 , … , k 1 . Our goal is to derive a bound on q i so that one can compute a depth-1 multiplication homomorphically at each level i . To perform a depth-1 multiplication homomorphically for BFV, BGV, and CKKS, we introduce Algorithm 16.

Figure 1: Plaintext depth-1 multiplication.

Algorithm 16. Depth-1 multiplication

Depth1 ( ct j , k , ek i , Q i , Q i 1 )
Input: ct j , k R n , Q i 2 , j = 1 , , 2 k 2 , k = 1 , , k 1 ciphertexts,
ek i R n , Q i 2 evaluation key at level i ,
Q i N integer modulus,
Q i 1 N integer modulus with Q i = q i Q i 1 .
Output: ct R n , Q i 1 2 .
Step 1. For j from 1 to 2 k 2 do
ct j Linearcombo ( ct j , 1 , , ct j , k 1 , 1 , , 1 ) .
Step 2. Initialize ct ( 0,0,0 ) .
For j from 1 to k 2 do
ct ct + Multiply ( ct 2 j 1 , ct 2 j ) .
Step 3. Compute ct Relinearize ( ct , ek i ) .
Step 4. Compute ct Modreduce ( Q i , Q i 1 , ct ) .
Step 5. Return ct .

In Algorithm 16, we remark that Multiply, Relinearize, and Modreduce call the respective algorithms for the inputted ciphertext type. For example, if each ct j , k is a BFV ciphertext, Algorithm 16 will use BFV.Multiply, BFV.Relinearize, and BFV.Modreduce, while Linearcombo is identical for all three ciphertext types. Note that ek i R n , p 1 Q i 2 is assumed to match the ciphertext type of the ct j , k ’s. Our relinearization takes place after summing together our ct j ’s obtained from Step 2, which are each a polynomial triple. This slightly improves our bounds, and is better from a computational perspective since we are only running Relinearize once in Algorithm 16.

To guarantee the amount of computation we can perform, we want to choose q i so that the output of Algorithm 16 is always a ciphertext with noise bounded by ρ when all the ct j , k inputs have noise bounded by ρ . The precise bound on q i is presented in Lemmas 4.1 and 4.2 for BFV and BGV, respectively.

Lemma 4.1

For any 1 i , suppose q i > 9 k 1 k 2 t n 2 and δ R 16 . Then, for a collection of BFV ciphertexts at level i all with noise bounded by ρ , the output of Algorithm 16 is a BFV ciphertext at level i 1 with noise bounded by ρ .

Lemma 4.2

For any 1 i , suppose q i > 4 k 1 2 k 2 t n 2 and δ R 16 . Then, for a collection of BGV ciphertexts at level i all with noise bounded by ρ , the output of Algorithm 16 is a BGV ciphertext at level i 1 with noise bounded by ρ .

For a similar depth-1 result in CKKS, we must use caution when finding conditions for q i . The reason for this is that in CKKS, we do not have much flexibility in choosing q i . For the standard scheme, it is always assumed that q i = Δ for each i ≠ 0 . Thus, the best that we can do is to bound the error in general after computing the depth-1 algorithm. We cannot force the error down to within a constant after rescaling without the assumption that q i ≫ Δ , which clearly contradicts the size of Δ needed in CKKS. We give the result on bounding CKKS noise below in Lemma 4.3.

Lemma 4.3

Let i be such that 1 i , and suppose that n 2 Δ and q i = Δ . Suppose we have a collection of CKKS ciphertexts at level i all with noise bounded by E . Furthermore, suppose that the corresponding messages z j C n 2 each satisfy z j Z . Then Algorithm 16 results in a CKKS ciphertext at level i 1 with noise bounded by 2 k 1 k 2 n E Z + k 1 k 2 E n Δ + k 1 2 k 2 E 2 n Δ + 1 8 .

One special case of a depth-1 multiplication is the inner product of vectors. That is, given vectors of messages m = ( m 1 , , m k ) R n , t k and m = ( m 1 , , m k ) R n , t k , we want to compute m , m R n , t homomorphically. This can be thought of as one round of k products between the corresponding ciphertexts of m 1 , , m k and m 1 , , m k , followed by k 1 additions to sum them together. This is simply a depth-1 multiplication with k 1 = 1 and k 2 = k . Alternatively, a depth-1 multiplication with k 1 = k and k 2 = 1 allows for k ciphertexts to be added together for two separate groups, followed by a single multiplication between the two sums. We argue that our proposed model allows for some more flexibility from a theoretical perspective, as we provide for additions both before and after multiplication at each modulus level.
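A plaintext-level sketch of the depth-1 pattern is shown below, with illustrative values of k 1 , k 2 , and t ; homomorphically, Algorithm 16 evaluates the same circuit on ciphertexts.

```python
# Plaintext-level sketch of the depth-1 multiplication of Definition 4.1:
# 2*k2 groups of k1 - 1 additions, one round of k2 multiplications, then
# k2 - 1 additions.  k1, k2, t and the messages are illustrative.

k1, k2, t = 3, 2, 257
msgs = [[(7 * j + k) % t for k in range(k1)] for j in range(2 * k2)]

sums = [sum(group) % t for group in msgs]          # Step 1: k1 - 1 additions each
prods = [(sums[2 * j] * sums[2 * j + 1]) % t       # one round of k2 multiplications
         for j in range(k2)]
result = sum(prods) % t                            # Step 2: k2 - 1 additions
assert result == 215
# The inner product <m, m'> is the special case k1 = 1, k2 = k.
```

Running the same circuit on ciphertexts requires the level bound on q i from Lemmas 4.1 and 4.2 so that the output noise stays below ρ.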

Remark. Algorithm 16 also works for groups of ciphertext inputs of size less than k 1 with arbitrary linear combinations in Step 1. That is, we can compute

ct j Linearcombo ( ct j , 1 , , ct j , k j , α j , 1 , , α j , k j )

so long as k j k 1 for each j . Furthermore, if for each j , we have

ω = 1 k j α j , ω k 1 ,

then Lemmas 4.1, 4.2, and 4.3 still apply.

4.2 Operations in the residue number system

Implementations of homomorphic encryption [28,31–33] take advantage of the RNS variants of schemes [19,21–23]. In our modified leveled homomorphic schemes, we require that the q i be chosen pairwise coprime in order to use the Chinese remainder theorem. Additions and multiplications are computed componentwise (except for BFV multiplication), which provides a major computational advantage over computation modulo large integers. However, algorithms for modulus reduction and relinearization need to be modified to avoid operations on large integers.

Basis conversion in RNS. Suppose q = q 0 ⋯ q k − 1 and p = q k ⋯ q k + ℓ − 1 , where q 0 , … , q k + ℓ − 1 are distinct primes. Denote

ℬ = ( q 0 , … , q k − 1 ) , C = ( q k , … , q k + ℓ − 1 )

as two arbitrary ordered sets which we call bases. For an element a ∈ R n , q , we denote [ a ] ℬ ∈ ∏ j = 0 k − 1 R n , q j as the vector of CRT components of a in basis ℬ . That is,

[ a ] ℬ = ( a ( 0 ) , a ( 1 ) , … , a ( k − 1 ) ) = ( a ( j ) ) 0 ≤ j ≤ k − 1 ,

where a ( j ) ≡ a mod q j for 0 ≤ j < k . We need to compute a mod p , that is, [ a ] C . To do this, let q ˆ j = q ∕ q j ∈ Z and r j ≡ q ˆ j − 1 a ( j ) mod q j for 0 ≤ j ≤ k − 1 , where ∥ r j ∥ ≤ q j ∕ 2 . Let

a ˜ = ∑ j = 0 k − 1 q ˆ j r j .

One can check that a ˜ ≡ a ( j ) ( mod q j ) for 0 ≤ j ≤ k − 1 , hence a ˜ ≡ a ( mod q ) . Then one can compute a ˜ mod q j for k ≤ j ≤ k + ℓ − 1 to get [ a ˜ ] C . This yields Algorithm 17 below from [21–23].

Algorithm 17. Fast basis conversion

Conv ( [ a ] ℬ , ℬ , C )
Input: ℬ = ( q 0 , … , q k − 1 ) with q = q 0 ⋯ q k − 1 ,
C = ( q k , … , q k + ℓ − 1 ) with p = q k ⋯ q k + ℓ − 1 ,
[ a ] ℬ = ( a ( 0 ) , … , a ( k − 1 ) ) , RNS representation of a ∈ R n , q in basis ℬ .
Output: [ a ˜ ] C = ( a ˜ ( 0 ) , … , a ˜ ( ℓ − 1 ) ) , RNS representation of a ˜ ∈ R n , p in basis C .
Step 1. For 0 ≤ i ≤ k − 1 , compute r i ← [ a ( i ) q ˆ i − 1 ] q i .
Step 2. For 0 ≤ i ≤ ℓ − 1 , compute
a ˜ ( i ) ← [ ∑ j = 0 k − 1 q ˆ j r j ] q k + i .
Step 3. Return [ a ˜ ] C = ( a ˜ ( 0 ) , … , a ˜ ( ℓ − 1 ) ) .

Lemma 4.4

([21–23]) Suppose the input of Algorithm 17 is the RNS representation in basis ℬ of an element a ∈ R n , q . Then, the output [ a ˜ ] C is the RNS representation in basis C of an element a ˜ ∈ R n , p satisfying

a ˜ = a + q e

for some e ∈ R n satisfying ∥ a ˜ ∥ ≤ q k ∕ 2 and ∥ e ∥ ≤ k ∕ 2 .

We note that the bound on e follows from the fact that ∥ a ˜ ∥ ≤ q k ∕ 2 . This means that a ˜ is only an approximation of a . There are other fast basis conversions in the literature, which give an exact switch (e.g., see [22]). For our purposes in the analysis of relinearization error, the approximate switching in Algorithm 17 will suffice.
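A scalar sketch of Algorithm 17 and the bounds of Lemma 4.4 follows; the moduli are illustrative, and the polynomial case applies this coefficient-wise.

```python
# Scalar sketch of Algorithm 17: from the residues of a modulo q = q0*q1 we
# obtain an element a~ = a + q*e with |e| <= k/2 in a new basis, as in
# Lemma 4.4.  All moduli are illustrative primes.
from math import prod

B = [1000003, 1000033]            # basis of q (k = 2 moduli)
C = [998244353]                   # target basis of p
q = prod(B)

def centered(x, m):
    """Representative of x mod m in (-m/2, m/2]."""
    r = x % m
    return r - m if r > m // 2 else r

a = 123_456_789_012 % q

# Step 1: r_i = [a^(i) * qhat_i^{-1}]_{q_i}, centered so |r_i| <= q_i/2
r = [centered((a % qi) * pow(q // qi, -1, qi), qi) for qi in B]
# Step 2: a~ = sum_i qhat_i * r_i, reduced modulo each modulus of C
a_tilde = sum((q // qi) * ri for qi, ri in zip(B, r))
conv = [a_tilde % c for c in C]   # the RNS output [a~]_C

assert (a_tilde - a) % q == 0             # a~ = a (mod q), so a~ = a + q*e
assert abs(a_tilde) <= q * len(B) // 2    # |a~| <= q*k/2, hence |e| <= k/2
```

The output is only congruent to a modulo q, not equal to it, which is exactly the approximation Lemma 4.4 quantifies.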

Modulus reduction in RNS. Let p be a factor of Q , say Q = q p . For any polynomial a ∈ R n , Q , we need to compute the rounding:

⌊ q a ∕ Q ⌉ = ⌊ a ∕ p ⌉ ∈ R n , q .

Suppose q = q 0 ⋯ q k − 1 and p = q k ⋯ q k + ℓ − 1 , where q 0 , … , q k + ℓ − 1 are distinct primes. In RNS, a is represented as ( a ( 0 ) , … , a ( k + ℓ − 1 ) ) , where

(4) a ≡ a ( i ) ( mod q i ) , 0 ≤ i ≤ k + ℓ − 1 .

By the Chinese remainder theorem, the solution a to equation (4) is unique modulo Q . Note that, for any two polynomials a and b with a b ( mod Q ) , we have

⌊ q a ∕ Q ⌉ ≡ ⌊ q b ∕ Q ⌉ ( mod q ) .

This is true even if q is not a factor of Q . Hence, we can use any solution a to (4) in the rounding.

Define $\hat{Q}_i = \frac{Q}{q_i}$ and

$$r_i \equiv \hat{Q}_i^{-1}a^{(i)} \bmod q_i, \quad 0 \le i \le k+\ell-1,$$

where the coefficients of $r_i$ are bounded by $\frac{q_i}{2}$. Then a solution of (4) is

$$a = \sum_{i=0}^{k+\ell-1}\hat{Q}_ir_i = p\sum_{i=0}^{k-1}\frac{q}{q_i}r_i + q\sum_{i=k}^{k+\ell-1}\frac{p}{q_i}r_i.$$

Note that

$$\frac{a}{p} = \sum_{i=0}^{k-1}\frac{q}{q_i}r_i + \sum_{i=k}^{k+\ell-1}\frac{q}{q_i}r_i.$$

The first sum has integer coefficients, and we only need to round the second sum. Let

$$w = \left\lfloor\sum_{i=k}^{k+\ell-1}\frac{q}{q_i}r_i\right\rceil.$$

Then

(5) $$\frac{a}{p} = \sum_{i=0}^{k-1}\frac{q}{q_i}r_i + w + e,$$

where $e \in \mathbb{R}[x]/(\phi(x))$ with $\|e\|_\infty \le \frac{1}{2}$. Also, note that, for $0 \le j \le k-1$,

$$\sum_{i=0}^{k-1}\frac{q}{q_i}r_i \equiv p^{-1}a^{(j)} \bmod q_j.$$

This yields Algorithm 18, which matches Algorithm 1.

Algorithm 18. RNS BFV modulus reduction (v1)

$\mathrm{RNS.BFV.Modreduce}([a]_{\mathcal{D}}, \mathcal{D}, \mathcal{B})$
Input: $\mathcal{D} = (q_0, \dots, q_{k+\ell-1})$,
$\mathcal{B} = (q_0, \dots, q_{k-1})$ with $q = q_0 \cdots q_{k-1}$ and $p = q_k \cdots q_{k+\ell-1}$,
$[a]_{\mathcal{D}} = (a^{(0)}, \dots, a^{(k+\ell-1)})$, RNS representation of $a \in R_{n,Q}$ in basis $\mathcal{D}$.
Output: $[b]_{\mathcal{B}} = (b^{(0)}, \dots, b^{(k-1)})$, RNS representation of $b \in R_{n,q}$ in basis $\mathcal{B}$.
Step 1. For $k \le i \le k+\ell-1$ and $\hat{p}_i = \frac{p}{q_i}$, compute
$$r_i \leftarrow [q^{-1}\hat{p}_i^{-1}a^{(i)}]_{q_i}$$
(so that $r_i \equiv \hat{Q}_i^{-1}a^{(i)} \bmod q_i$, as in the derivation above).
Step 2. Compute
$$w \leftarrow \left\lfloor\sum_{i=k}^{k+\ell-1}\frac{q}{q_i}r_i\right\rceil \text{ in } R_n.$$
Step 3. For $0 \le j \le k-1$, compute
$$b^{(j)} \leftarrow [p^{-1}a^{(j)} + w]_{q_j}.$$
Step 4. Return $[b]_{\mathcal{B}} = (b^{(0)}, \dots, b^{(k-1)}) \in \prod_{i=0}^{k-1}R_{n,q_i}$.

The $w$ above gives a rounding error of at most $\frac{1}{2}$; however, it may be too expensive to compute, as its coefficients are large. Next, we derive a faster rounding method with a slightly larger rounding error. Let $\hat{p}_i = \frac{p}{q_i}$ and $v_i \equiv \hat{p}_i^{-1}a^{(i)} \bmod q_i$ with $\|v_i\|_\infty \le \frac{q_i}{2}$ for $k \le i \le k+\ell-1$. Then

$$v = \sum_{i=k}^{k+\ell-1}\hat{p}_iv_i$$

satisfies $v \equiv a \pmod{p}$ and $\|v\|_\infty \le \frac{\ell p}{2}$. Let $u = \frac{a - v}{p}$, which has integer coefficients. Then

(6) $$\frac{a}{p} = u + e,$$

where $e = \frac{v}{p} \in \mathbb{R}[x]/(\phi(x))$ with $\|e\|_\infty \le \frac{\ell}{2}$. In RNS, $u \bmod q$ can be obtained by first computing $v \bmod q_j$, $0 \le j \le k-1$, via basis conversion from $p$ to $q$. Algorithm 19 describes the procedure, while Lemma 4.5 gives the noise bound. We omit the proof of Lemma 4.5, as it is clear from the preceding discussion.

Algorithm 19. RNS BFV modulus reduction (v2)

$\mathrm{RNS.BFV.Modreduce}([a]_{\mathcal{D}}, \mathcal{D}, \mathcal{B})$
Input: $\mathcal{D} = (q_0, \dots, q_{k+\ell-1})$,
$\mathcal{B} = (q_0, \dots, q_{k-1})$ with $q = q_0 \cdots q_{k-1}$ and $p = q_k \cdots q_{k+\ell-1}$,
$[a]_{\mathcal{D}} = (a^{(0)}, \dots, a^{(k+\ell-1)})$, RNS representation of $a \in R_{n,Q}$ in basis $\mathcal{D}$.
Output: $[b]_{\mathcal{B}} = (b^{(0)}, \dots, b^{(k-1)})$, RNS representation of $b \in R_{n,q}$ in basis $\mathcal{B}$.
Step 1. Let $\mathcal{C} = \mathcal{D} \setminus \mathcal{B} = (q_k, \dots, q_{k+\ell-1})$. Compute
$$(v^{(0)}, \dots, v^{(k-1)}) \leftarrow \mathrm{Conv}((a^{(k)}, \dots, a^{(k+\ell-1)}), \mathcal{C}, \mathcal{B}).$$
Step 2. For $0 \le j \le k-1$, compute
$$b^{(j)} \leftarrow p^{-1}(a^{(j)} - v^{(j)}) \bmod q_j.$$
Step 3. Return $[b]_{\mathcal{B}} = (b^{(0)}, \dots, b^{(k-1)}) \in \prod_{i=0}^{k-1}R_{n,q_i}$.

Lemma 4.5

Let the RNS representation of $a \in R_{n,Q}$ be the input of Algorithm 19 and the RNS representation of $b \in R_{n,q}$ the output. Then,

$$\frac{a}{p} = b + e$$

for some $e \in \mathbb{R}[x]/(\phi(x))$ with $\|e\|_\infty \le \frac{\ell}{2}$.
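Lemma 4.5 can be checked numerically on a single integer coefficient; the sketch below uses toy moduli of our own choosing and verifies that $\lfloor a/p \rceil$ and the output agree modulo each $q_j$ up to the small error the lemma allows (plus $\frac12$ from the rounding).

```python
# Integer-coefficient sketch of the fast RNS modulus reduction (Algorithm 19):
# given a in RNS for Q = q*p, compute b with a/p = b + e (mod q), |e| small.

def centered(x, m):
    r = x % m
    return r - m if r > m // 2 else r

def modreduce_v2(a_rns, B, C):
    """a_rns holds a mod each prime of D = B + C; returns b mod each q_j in B."""
    p = 1
    for qi in C:
        p *= qi
    # Step 1: basis conversion of the residues on C, i.e. lift
    # v = sum_i phat_i * v_i with v_i = [phat_i^{-1} a^{(i)}]_{q_i}
    v = 0
    for ai, qi in zip(a_rns[len(B):], C):
        phat = p // qi
        v += phat * centered(ai * pow(phat, -1, qi), qi)
    # Step 2: b^{(j)} = p^{-1}(a^{(j)} - v) mod q_j (exact division modulo q_j)
    return [centered((aj - v) * pow(p, -1, qj), qj)
            for aj, qj in zip(a_rns[:len(B)], B)]

B = [101, 103]           # q = 10403
C = [107, 109]           # p = 11663
q, p = 101 * 103, 107 * 109
Q = q * p
a = centered(123456789 % Q, Q)          # centered representative in R_{n,Q}
b = modreduce_v2([a % qi for qi in B + C], B, C)
rounded = (2 * a + p) // (2 * p)        # round(a/p), exact integer arithmetic
for bj, qj in zip(b, B):
    # gap is at most l/2 (lemma) + 1/2 (from rounding a/p)
    assert abs(centered(rounded - bj, qj)) <= len(C) / 2 + 0.5
```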

Next, we show a BGV modulus reduction in RNS that matches Algorithm 2. When $q$ is a factor of $Q$, say $Q = qp$, note that $[-a_0qt^{-1}]_Q$ is the same as $q[-a_0t^{-1}]_p$. Hence, Algorithm 2 can be simplified as follows:

$$\omega_a \leftarrow [-a_0t^{-1}]_p \quad\text{and}\quad \omega_b \leftarrow [-b_0t^{-1}]_p, \qquad a_0 \leftarrow \left[\frac{a_0 + t\omega_a}{p}\right]_q \quad\text{and}\quad b_0 \leftarrow \left[\frac{b_0 + t\omega_b}{p}\right]_q.$$

Algorithm 20 describes the above procedure for the reduction of one polynomial, while Lemma 4.6 gives the noise bound. We omit the proof of Lemma 4.6, since it is clear from the discussion above and the proof of Lemma 2.3.

Algorithm 20. RNS BGV modulus reduction

$\mathrm{RNS.BGV.Modreduce}([a]_{\mathcal{D}}, \mathcal{D}, \mathcal{B})$
Input: $\mathcal{D} = (q_0, q_1, \dots, q_{k+\ell-1})$,
$\mathcal{B} = (q_0, \dots, q_{k-1})$ and $p = q_k \cdots q_{k+\ell-1}$,
$[a]_{\mathcal{D}} = (a^{(0)}, \dots, a^{(k+\ell-1)})$, RNS representation of $a \in R_{n,Q}$ in basis $\mathcal{D}$.
Output: $[b]_{\mathcal{B}} = (b^{(0)}, \dots, b^{(k-1)})$, RNS representation of $b \in R_{n,q}$ in basis $\mathcal{B}$.
Step 1. For $k \le i \le k+\ell-1$, compute
$$w^{(i)} \leftarrow [-t^{-1}a^{(i)}]_{q_i}.$$
Step 2. Let $\mathcal{C} = \mathcal{D} \setminus \mathcal{B} = (q_k, \dots, q_{k+\ell-1})$. Compute
$$(u^{(0)}, \dots, u^{(k-1)}) \leftarrow \mathrm{Conv}((w^{(k)}, \dots, w^{(k+\ell-1)}), \mathcal{C}, \mathcal{B}).$$
Step 3. For $0 \le i \le k-1$, compute
$$b^{(i)} \leftarrow [p^{-1}(a^{(i)} + tu^{(i)})]_{q_i}.$$
Step 4. Return $(b^{(0)}, \dots, b^{(k-1)}) \in \prod_{i=0}^{k-1}R_{n,q_i}$.

Lemma 4.6

Let the RNS representation of $a \in R_{n,Q}$ be the input of Algorithm 20 and the RNS representation of $b \in R_{n,q}$ the output. Then,

$$\frac{a}{p} = b + te$$

for some $e \in \mathbb{R}[x]/(\phi(x))$ with $\|e\|_\infty \le \frac{\ell}{2}$.
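The BGV variant can likewise be exercised on a single integer coefficient. The sketch below uses toy moduli and a toy plaintext modulus $t$ (both our own illustrative choices), and checks that the output is within $\frac{t\ell}{2} + \frac12$ of $\lfloor a/p \rceil$ modulo each $q_j$, as Lemma 4.6 predicts.

```python
# Integer-coefficient sketch of the RNS BGV modulus reduction (Algorithm 20).
# t must be coprime to every prime used; toy sizes only.

def centered(x, m):
    r = x % m
    return r - m if r > m // 2 else r

def bgv_modreduce(a_rns, B, C, t):
    """a_rns holds a mod each prime of D = B + C; returns b mod each q_j in B."""
    p = 1
    for qi in C:
        p *= qi
    # Step 1: w^(i) = [-t^{-1} a^(i)]_{q_i} on the primes being dropped
    w = [centered(-ai * pow(t, -1, qi), qi) for ai, qi in zip(a_rns[len(B):], C)]
    # Step 2: approximate basis conversion of w from C to B (CRT lift)
    u = 0
    for wi, qi in zip(w, C):
        phat = p // qi
        u += phat * centered(wi * pow(phat, -1, qi), qi)
    # Step 3: b^(j) = [p^{-1}(a^(j) + t*u)]_{q_j}; a + t*u is divisible by p
    return [centered((aj + t * u) * pow(p, -1, qj), qj)
            for aj, qj in zip(a_rns[:len(B)], B)]

t = 5
B = [101, 103]
C = [107, 109]
p, q = 107 * 109, 101 * 103
Q = q * p
a = centered(987654321 % Q, Q)
b = bgv_modreduce([a % qi for qi in B + C], B, C, t)
rounded = (2 * a + p) // (2 * p)        # round(a/p), exact in integers
for bj, qj in zip(b, B):
    # Lemma 4.6: a/p = b + t*e with |e| <= l/2
    assert abs(centered(rounded - bj, qj)) <= t * len(C) / 2 + 0.5
```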

For the CKKS rescaling procedure in RNS, we can again use the same procedure as the fast BFV modulus reduction in Algorithm 19. That is,

$$\mathrm{RNS.CKKS.Modreduce} = \mathrm{RNS.BFV.Modreduce}.$$

Note that we specifically use version (v2) of the RNS BFV modulus reduction for RNS CKKS. Likewise, we can use Lemma 4.5 for RNS CKKS when discussing the noise bound after performing $\mathrm{RNS.CKKS.Modreduce}$.

Relinearization in RNS. For the rest of this section, fix $Q = q_0 \cdots q_\ell$ and $P = p_0 \cdots p_{k-1}$ with $q_0, \dots, q_\ell, p_0, \dots, p_{k-1}$ all pairwise coprime. For $0 \le i \le \ell$, let $Q_i = q_0 \cdots q_i$ and fix the ordered bases

$$\mathcal{B} = (q_0, \dots, q_i), \quad \mathcal{C} = (p_0, \dots, p_{k-1}), \quad \mathcal{D} = \mathcal{B} \cup \mathcal{C} = (q_0, \dots, q_i, p_0, \dots, p_{k-1}).$$

The goal of our relinearization algorithms in RNS is again to closely match the relinearization procedures previously outlined for the classic variants in Algorithms 8 and 13. We first introduce the evaluation key generation for the three schemes, followed by their relinearization procedures. We note that we only discuss the evaluation key generation here; the generation of the other keys (secret and public encryption keys) is similarly an RNS version of Algorithms 3 and 10.

We begin with RNS BFV. The procedure for evaluation key generation is given in Algorithm 21. Observe that this evaluation key generation is the same as the evaluation key generation in Algorithm 3, only computed in RNS.

Algorithm 21. RNS BFV evaluation key generation

$\mathrm{RNS.BFV.ek.Keygen}(\mathrm{sk}, (q_0, \dots, q_\ell, p_0, \dots, p_{k-1}))$
Input: $\mathrm{sk} = s \in R_{n,3}$ secret key,
$(q_0, \dots, q_\ell, p_0, \dots, p_{k-1})$ full RNS basis.
Output: $\mathrm{ek} = (\tilde{k}_0^{(j)}, \tilde{k}_1^{(j)})_{0 \le j \le k+\ell} \in \prod_{j=0}^{\ell}R_{n,q_j}^2 \times \prod_{j=0}^{k-1}R_{n,p_j}^2$ evaluation key.
Step 1. Sample $(\tilde{k}_0^{(0)}, \dots, \tilde{k}_0^{(k+\ell)}) \leftarrow U\left(\prod_{j=0}^{\ell}R_{n,q_j} \times \prod_{j=0}^{k-1}R_{n,p_j}\right)$ and $\tilde{e} \leftarrow \chi_\rho$.
Step 2. For $0 \le j \le \ell$ compute
$$\tilde{k}_1^{(j)} \leftarrow [-\tilde{k}_0^{(j)}s + [P]_{q_j}s^2 + \tilde{e}]_{\phi(x),q_j}.$$
Step 3. For $0 \le j \le k-1$ compute
$$\tilde{k}_1^{(\ell+1+j)} \leftarrow [-\tilde{k}_0^{(\ell+1+j)}s + \tilde{e}]_{\phi(x),p_j}.$$
Step 4. Return $\mathrm{ek} = (\tilde{k}_0^{(j)}, \tilde{k}_1^{(j)})_{0 \le j \le k+\ell}$.

For the RNS BGV scheme, the evaluation key generation is again a similar approach to the original evaluation key generation from Algorithm 9. Algorithm 22 gives the procedure for RNS BGV.

Algorithm 22. RNS BGV evaluation key generation

$\mathrm{RNS.BGV.ek.Keygen}(\mathrm{sk}, (q_0, \dots, q_\ell, p_0, \dots, p_{k-1}))$
Input: $\mathrm{sk} = s \in R_{n,3}$ secret key,
$(q_0, \dots, q_\ell, p_0, \dots, p_{k-1})$ full RNS basis.
Output: $\mathrm{ek} = (\tilde{k}_0^{(j)}, \tilde{k}_1^{(j)})_{0 \le j \le k+\ell} \in \prod_{j=0}^{\ell}R_{n,q_j}^2 \times \prod_{j=0}^{k-1}R_{n,p_j}^2$ evaluation key.
Step 1. Sample $(\tilde{k}_0^{(0)}, \dots, \tilde{k}_0^{(k+\ell)}) \leftarrow U\left(\prod_{j=0}^{\ell}R_{n,q_j} \times \prod_{j=0}^{k-1}R_{n,p_j}\right)$ and $\tilde{e} \leftarrow \chi_\rho$.
Step 2. For $0 \le j \le \ell$ compute
$$\tilde{k}_1^{(j)} \leftarrow [-\tilde{k}_0^{(j)}s + [P]_{q_j}s^2 + t\tilde{e}]_{\phi(x),q_j}.$$
Step 3. For $0 \le j \le k-1$ compute
$$\tilde{k}_1^{(\ell+1+j)} \leftarrow [-\tilde{k}_0^{(\ell+1+j)}s + t\tilde{e}]_{\phi(x),p_j}.$$
Step 4. Return $\mathrm{ek} = (\tilde{k}_0^{(j)}, \tilde{k}_1^{(j)})_{0 \le j \le k+\ell}$.

For the RNS CKKS scheme, the evaluation key generation is exactly the same as the evaluation key generation for RNS BFV:

RNS.CKKS.ek.Keygen = RNS.BFV.ek.Keygen .

We are now ready to discuss the full RNS relinearization procedures. In these algorithms, we assume that we have obtained a vector of components $(c_0^{(j)}, c_1^{(j)}, c_2^{(j)})_{0 \le j \le i} \in \prod_{j=0}^{i}R_{n,q_j}^3$, which is the RNS representation of some $(c_0, c_1, c_2) \in R_{n,Q_i}^3$ obtained after the initial RNS multiplication for BFV, BGV, or CKKS. The initial RNS multiplication operations for BGV and CKKS are simply computed componentwise modulo the $q_j$'s. The procedure for BFV is more complicated: specifically, Step 2 of Algorithm 7, in which we must scale down components without modular reduction, is difficult to perform in RNS representation. We refer the reader to previous studies [22,23] for the details on the initial RNS BFV multiplication.

For all three schemes, the RNS relinearization procedure is given in Algorithm 23. Here, $\mathrm{RNS.Modreduce}$ is the RNS modulus reduction procedure for the chosen scheme, and $\mathrm{ek}$ is the corresponding evaluation key for that scheme. For instance, to run RNS relinearization for BGV, we use $\mathrm{RNS.BGV.Modreduce}$ in Step 4 with the corresponding BGV evaluation key $\mathrm{ek}$. Lemmas 4.7, 4.8, and 4.9 prove correctness of the algorithm and our noise bounds for the RNS variants of BFV, BGV, and CKKS, respectively.

Algorithm 23. RNS relinearization

$\mathrm{RNS.Relinearize}([(c_0, c_1, c_2)]_{\mathcal{B}}, [\mathrm{ek}]_{\mathcal{D}})$
Input: $[(c_0, c_1, c_2)]_{\mathcal{B}} = (c_0^{(j)}, c_1^{(j)}, c_2^{(j)})_{0 \le j \le i} \in \prod_{j=0}^{i}R_{n,q_j}^3$,
$[\mathrm{ek}]_{\mathcal{D}} = (\tilde{k}_0^{(j)}, \tilde{k}_1^{(j)})_{0 \le j \le i+k} \in \prod_{j=0}^{i}R_{n,q_j}^2 \times \prod_{j=0}^{k-1}R_{n,p_j}^2$ evaluation key.
Output: $[\mathrm{ct}]_{\mathcal{B}} = (a^{(j)}, b^{(j)})_{0 \le j \le i}$ RNS ciphertext.
Step 1. Compute $(\tilde{c}_2^{(0)}, \dots, \tilde{c}_2^{(k-1)}) \leftarrow \mathrm{Conv}((c_2^{(0)}, \dots, c_2^{(i)}), \mathcal{B}, \mathcal{C})$.
Step 2. For $0 \le j \le i$ compute
$$\hat{a}^{(j)} \leftarrow [c_2^{(j)}\tilde{k}_0^{(j)}]_{\phi(x),q_j}, \qquad \hat{b}^{(j)} \leftarrow [c_2^{(j)}\tilde{k}_1^{(j)}]_{\phi(x),q_j}.$$
Step 3. For $0 \le j \le k-1$ compute
$$\hat{a}^{(i+1+j)} \leftarrow [\tilde{c}_2^{(j)}\tilde{k}_0^{(i+1+j)}]_{\phi(x),p_j}, \qquad \hat{b}^{(i+1+j)} \leftarrow [\tilde{c}_2^{(j)}\tilde{k}_1^{(i+1+j)}]_{\phi(x),p_j}.$$
Step 4. Compute
$$(\hat{c}_1^{(0)}, \dots, \hat{c}_1^{(i)}) \leftarrow \mathrm{RNS.Modreduce}((\hat{a}^{(0)}, \dots, \hat{a}^{(i+k)}), \mathcal{D}, \mathcal{B}),$$
$$(\hat{c}_0^{(0)}, \dots, \hat{c}_0^{(i)}) \leftarrow \mathrm{RNS.Modreduce}((\hat{b}^{(0)}, \dots, \hat{b}^{(i+k)}), \mathcal{D}, \mathcal{B}).$$
Step 5. For $0 \le j \le i$ compute
$$a^{(j)} \leftarrow [c_1^{(j)} + \hat{c}_1^{(j)}]_{q_j}, \qquad b^{(j)} \leftarrow [c_0^{(j)} + \hat{c}_0^{(j)}]_{q_j}.$$
Step 6. Return $\mathrm{ct} = (a^{(j)}, b^{(j)})_{0 \le j \le i}$.

Lemma 4.7

Let $\mathrm{ct}$ be the output of Algorithm 23 and suppose the input $(c_0^{(j)}, c_1^{(j)}, c_2^{(j)})_{0 \le j \le i}$ is the RNS representation of some $(c_0, c_1, c_2) \in R_{n,Q_i}^3$ satisfying

$$c_0 + c_1s + c_2s^2 \equiv D_{Q_i}[m_0m_1]_{\phi(x),t} + e \bmod (\phi(x), Q_i)$$

for some $e$ with $\|e\|_\infty \le E$. If $P \ge 6Q_i$, $\delta_R \ge 16$, and $k > i$, then $\mathrm{ct}$ is the RNS representation of a BFV ciphertext with noise bounded by $E + \frac{1}{8}\delta_R^2k$.

Lemma 4.8

Let $\mathrm{ct}$ be the output of Algorithm 23 and suppose the input $(c_0^{(j)}, c_1^{(j)}, c_2^{(j)})_{0 \le j \le i}$ is the RNS representation of some $(c_0, c_1, c_2) \in R_{n,Q_i}^3$ satisfying

$$c_0 + c_1s + c_2s^2 \equiv [m_0m_1]_{\phi(x),t} + te \bmod (\phi(x), Q_i)$$

for some $e$ with $\|e\|_\infty \le E$. If $P \ge 6Q_i$, $\delta_R \ge 16$, and $k > i$, then $\mathrm{ct}$ is the RNS representation of a BGV ciphertext with noise bounded by $E + \frac{1}{8}\delta_R^2k$.

Lemma 4.9

Let $\mathrm{ct}$ be the output of Algorithm 23 and suppose the input $(c_0^{(j)}, c_1^{(j)}, c_2^{(j)})_{0 \le j \le i}$ is the RNS representation of some $(c_0, c_1, c_2) \in R_{n,Q_i}^3$ satisfying

$$c_0 + c_1s + c_2s^2 \equiv m_0m_1 + e \bmod (\phi(x), Q_i)$$

for some $e$ with $\|e\|_\infty \le E$. If $P \ge 6Q_i$, $\delta_R \ge 16$, and $k > i$, then $\mathrm{ct}$ is the RNS representation of a CKKS ciphertext with noise bounded by $E + \frac{1}{8}\delta_R^2k$.

In addition to the relinearization technique we opt for, we should also mention that there exist RNS versions of the alternate relinearization technique from Section 3.1 [22,23]. In the RNS version, the element $c_2$ is essentially expanded into a decomposition twice: first an expansion in the $q_j$'s, and then another expansion in a fixed base $B$ for each component corresponding to each $q_j$. Though this technique is certainly viable, we opt for the outlined technique due to the smaller size of $\mathrm{ek}$. In practice, both relinearization techniques are used together in a method known as hybrid key switching [34], whose noise bound is quite similar to what we have outlined in Lemmas 4.7, 4.8, and 4.9: using the noise bound from the study by Kim et al. [16] and practical estimates of $\mathrm{dnum}$ from [9], the bound for hybrid key switching ranges from about $E + \frac{3}{8}\delta_R^2k$ to $E + \frac{10}{8}\delta_R^2k$ with our approach. We refer the reader to [16,34] for more details on hybrid key switching.

5 Lattices, security, and attacks

The security of homomorphic encryption schemes is based on the LWE problem over finite fields, which can be reduced to lattice problems. In this section, we give an overview of these lattice problems as well as various attacks on LWE. As this article focuses on noise reduction in homomorphic encryption schemes, we provide only a brief overview of security and attacks; for a more in-depth discussion on security, we refer the reader to various sources [24,30,35]. Decision-RLWE can be shown to be as hard as many worst-case lattice problems [36], and there is a brief mention of security reductions from RLWE to LWE in [30]. We discuss attacks on classic LWE rather than RLWE, since RLWE problems can easily be converted into LWE problems and the best known attack algorithms target LWE rather than RLWE.

5.1 Lattices and lattice problems

Let $m \ge 1$. A subset $\Lambda \subseteq \mathbb{R}^m$ is called a lattice if $\Lambda$ is a discrete additive subgroup of $\mathbb{R}^m$, i.e., each point of $\Lambda$ is isolated (no two distinct points of $\Lambda$ are arbitrarily close to each other). In general, a lattice can be generated in the following way: given a matrix $B = (b_1, \dots, b_n) \in \mathbb{R}^{m \times n}$ with linearly independent columns, a lattice can be defined via

$$\Lambda = \{y \in \mathbb{R}^m : y = Bx,\ x \in \mathbb{Z}^n\}.$$

Here, $\Lambda \subseteq \mathbb{R}^m$ is an $n$-dimensional lattice. We call $B$ a lattice basis. For a lattice $\Lambda$ defined by lattice basis $B$, the volume $\mathrm{vol}(\Lambda)$ is defined as follows:

$$\mathrm{vol}(\Lambda) = \sqrt{\det(B^TB)},$$

which can be proved to be independent of the choice of basis. Let

$$\lambda(\Lambda) = \min\{\|x\|_2 : x \in \Lambda,\ x \ne 0\}.$$

Definitions 5.1, 5.2, and 5.3 describe a few instances of well-studied lattice problems for a lattice Λ [30].

Definition 5.1

(SVP) The shortest vector problem (SVP) is as follows: given a basis $B$ of $\Lambda$, find a vector $v \in \Lambda$ such that $\|v\|_2 = \lambda(\Lambda)$.

Definition 5.2

($\gamma$-SVP) The $\gamma$-approximate shortest vector problem ($\gamma$-SVP) is as follows: given a basis $B$ of $\Lambda$, find a nonzero vector $v \in \Lambda$ such that $\|v\|_2 \le \gamma\lambda(\Lambda)$.

Definition 5.3

($\gamma$-GapSVP) The $\gamma$-gap shortest vector problem ($\gamma$-GapSVP) is as follows: given a basis $B$ of $\Lambda$ and a real number $r > 0$, output "yes" if $\lambda(\Lambda) \le r$ and output "no" if $\lambda(\Lambda) > \gamma r$.

These lattice problems are examples of problems that are NP-hard in the worst case. Regev [24] provides a reduction from an instance of $\gamma$-GapSVP to Decision-LWE [35], meaning that LWE is at least as hard as $\gamma$-GapSVP. For potential attacks on LWE-based schemes, we discuss the attack strategies outlined in previous studies [24,30].

$q$-ary Lattices. In lattice-based cryptography, a particularly important class of lattices is the $q$-ary lattices. We say $\Lambda$ is a $q$-ary lattice if $\Lambda$ is a lattice such that $q\mathbb{Z}^m \subseteq \Lambda \subseteq \mathbb{Z}^m$. In particular, given a matrix $A \in \mathbb{Z}^{m \times n}$ of rank $n$ modulo $q$, the following are $q$-ary lattices:

$$\Lambda_q(A) = \{x \in \mathbb{Z}^m : x \equiv Ay \bmod q \text{ for some } y \in \mathbb{Z}^n\}, \qquad \Lambda_q^{\perp}(A) = \{x \in \mathbb{Z}^m : x^TA \equiv 0 \bmod q\}.$$

It is also worth mentioning that although the matrix $A$ defines both of these lattices, $A$ is not necessarily a lattice basis for either of them. To find a lattice basis of $\Lambda_q(A)$, we can perform column operations on $A$ modulo $q$, permuting the rows if necessary, to obtain a matrix

$$\begin{pmatrix} I_n \\ A_1 \end{pmatrix} \in \mathbb{Z}_q^{m \times n},$$

where $I_n$ is the $n \times n$ identity matrix and $A_1 \in \mathbb{Z}_q^{(m-n) \times n}$. Let

(7) $$B = \begin{pmatrix} I_n & 0 \\ A_1 & qI_{m-n} \end{pmatrix} \in \mathbb{Z}^{m \times m}.$$

Then $B$ has rank $m$ and is a lattice basis for $\Lambda_q(A)$. Since $\Lambda_q(A)$ is a full-rank lattice, $\mathrm{vol}(\Lambda_q(A)) = \det(B) = q^{m-n}$.

To obtain a lattice basis for $\Lambda_q^{\perp}(A)$, let $A_1$ be as mentioned earlier. Note that the solution space of $x^TA \equiv 0 \bmod q$ is spanned modulo $q$ by the columns of the matrix

$$\begin{pmatrix} -A_1^T \\ I_{m-n} \end{pmatrix} \in \mathbb{Z}^{m \times (m-n)}.$$

Then, a basis for $\Lambda_q^{\perp}(A)$ is

$$B^{\perp} = \begin{pmatrix} -A_1^T & qI_n \\ I_{m-n} & 0 \end{pmatrix} \in \mathbb{Z}^{m \times m}.$$

The volume of this lattice is $\mathrm{vol}(\Lambda_q^{\perp}(A)) = q^n$.
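The claimed basis can be sanity-checked on a toy instance: every column $x$ of $B^{\perp}$ must satisfy $x^TA \equiv 0 \bmod q$. The matrix entries below are arbitrary illustrative values, with $A$ already in the systematic form $(I_n; A_1)$.

```python
# Toy verification that the columns of B_perp = [[-A1^T, q*I_n], [I_{m-n}, 0]]
# lie in the dual q-ary lattice {x : x^T A == 0 mod q}.

q = 17
n, m = 2, 5
A1 = [[3, 5], [7, 11], [2, 9]]          # (m-n) x n block, entries mod q
A = [[1, 0], [0, 1]] + A1               # A in systematic form: I_n on top of A1

def column(j):
    """j-th column of B_perp as a length-m vector."""
    if j < m - n:                       # column (-A1^T e_j ; e_j)
        return [-A1[j][i] for i in range(n)] + \
               [1 if r == j else 0 for r in range(m - n)]
    i = j - (m - n)                     # column (q e_i ; 0)
    return [q if r == i else 0 for r in range(n)] + [0] * (m - n)

for j in range(m):
    x = column(j)
    # x^T A mod q must vanish for x to lie in the dual lattice
    xTA = [sum(x[r] * A[r][c] for r in range(m)) % q for c in range(n)]
    assert xTA == [0] * n
```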

Gaussian heuristic. By the Gaussian heuristic, for a lattice $\Lambda$ of rank $m$, we expect its shortest vector to be of length

$$\sqrt{\frac{m}{2\pi\exp(1)}}\,\mathrm{vol}(\Lambda)^{\frac{1}{m}}$$

on average [24,37], where $\exp(1) = 2.7182\ldots$ is the exponential function evaluated at $1$. As the lattice $\Lambda_q(A)$ has volume $q^{m-n}$ and $A_1$ is uniform random in $\mathbb{Z}_q^{(m-n) \times n}$, we can use the Gaussian heuristic and expect the shortest vector in $\Lambda_q(A)$ to be of length

(8) $$\sqrt{\frac{m}{2\pi\exp(1)}}\,q^{1-\frac{n}{m}}$$

on average. Similarly, for $\Lambda_q^{\perp}(A)$, we expect its shortest vector to be of length

$$\sqrt{\frac{m}{2\pi\exp(1)}}\,q^{\frac{n}{m}}$$

on average.
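These estimates are easy to evaluate numerically; the parameters $n$, $m$, $q$ below are arbitrary illustrative choices, not values from any standard.

```python
# Numeric sketch of the Gaussian heuristic lengths above.
import math

def gh_length(m, vol_mth_root):
    """Expected shortest-vector length: sqrt(m / (2*pi*e)) * vol(L)^(1/m)."""
    return math.sqrt(m / (2 * math.pi * math.e)) * vol_mth_root

n, m, q = 256, 512, 2 ** 30
primal = gh_length(m, q ** (1 - n / m))   # Lambda_q(A), volume q^(m-n)
dual = gh_length(m, q ** (n / m))         # Lambda_q^perp(A), volume q^n
# With m = 2n both exponents equal 1/2, so the two estimates coincide
assert abs(primal - dual) < 1e-6
```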

5.2 LWE attack strategies

First, recall the LWE problems outlined in Section 2.2. Fix a secret $s \in \mathbb{Z}_q^n$. We sample $e_i \leftarrow \chi(\mathbb{Z})$ from some desired distribution $\chi$ such that $|e_i| \le \rho$, where $\rho$ is a desired parameter. Then, we sample a uniform random $a_i \in \mathbb{Z}_q^n$ and calculate $b_i$ via $b_i = [\langle a_i, s \rangle + e_i]_q$. The ordered pair $(a_i, b_i) \in \mathbb{Z}_q^n \times \mathbb{Z}_q$ is called an LWE sample. The Search-LWE problem is to find $s$ given many LWE samples. The Decision-LWE problem is, given many samples that are either LWE samples or uniform random samples, to decide from which of the two distributions the samples are drawn [2,24]. If we sample $m$ times, we can instead think of the LWE samples as the matrix equation

(9) $$b \equiv As + e \bmod q,$$

where $b \in \mathbb{Z}_q^m$ is the vector of the $b_i$'s, $A \in \mathbb{Z}_q^{m \times n}$ is the matrix whose rows are the $a_i$'s, and $e \in \mathbb{Z}_q^m$ is the vector of the $e_i$'s.
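A toy LWE instance illustrating equation (9) can be generated in a few lines; the parameters and tiny error range below are illustrative only and far too small to be secure.

```python
# Toy LWE samples: b = A*s + e (mod q), one sample per row of A.
import random

random.seed(1)
n, m, q, rho = 8, 16, 97, 2

s = [random.randrange(q) for _ in range(n)]                # secret
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
e = [random.randint(-rho, rho) for _ in range(m)]          # small error
b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

# Knowing s, subtracting <a_i, s> from b_i recovers the small error e_i
for i in range(m):
    diff = (b[i] - sum(A[i][j] * s[j] for j in range(n))) % q
    diff = diff - q if diff > q // 2 else diff
    assert diff == e[i]
```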

Dual attacks via SVP. To solve Decision-LWE, we can employ a dual attack. Let $A$ be the matrix defined in equation (9). In a dual attack, we wish to find a short vector $v \in \Lambda_q^{\perp}(A)$. Since $\langle v, b \rangle = \langle v, As + e \rangle = \langle v, e \rangle \bmod q$ and $e$ is short, $\langle v, b \rangle$ is small. If an adversary can find a short vector $v \in \Lambda_q^{\perp}(A)$, then the adversary can solve the Decision-LWE problem with a fair amount of confidence, since $\langle v, b \rangle$ would likely not be small for uniform random $b$. Thus, the attacker can distinguish LWE samples from uniform random samples with an advantage. We refer the reader to [8,24,38,39] for more details on dual attacks and the exact advantages based on the sizes of $e$ and $v$.

Primal attacks via SVP. A common attack strategy for Search-LWE uses SVP. Let $A$ be the matrix defined in equation (9) and $B$ be computed from $A$ as in equation (7). Let

$$\tilde{B} = \begin{pmatrix} B & b \\ 0 & 1 \end{pmatrix} \in \mathbb{Z}^{(m+1) \times (m+1)},$$

and let $\tilde{\Lambda}$ be the lattice defined by $\tilde{B}$, which has volume $\mathrm{vol}(\tilde{\Lambda}) = q^{m-n}$. Then equation (9) means that the vector $\binom{e}{1}$ is in $\tilde{\Lambda}$. By the Gaussian heuristic in equation (8), we expect the shortest vector in the lattice generated by $B$ to be of length about

$$\sqrt{\frac{m}{2\pi\exp(1)}}\,q^{1-\frac{n}{m}},$$

since the entries of $A_1$ are uniform random in $\mathbb{Z}_q$. If $\|e\|_2$ is smaller than this, then the shortest vector in $\tilde{\Lambda}$ is likely to be $\binom{e}{1}$ with significant probability. Thus, when $e$ is small, we can solve SVP for the lattice $\tilde{\Lambda}$ to find $e$, and in turn solve LWE. The error distribution $\chi$ is crucial in determining the expected size of $e$, which we discuss thoroughly in the next subsection. We refer the reader to previous studies [30,39,40] for more details on primal attacks.

Lattice basis reduction algorithms. Several algorithms employ these strategies, as well as others, to solve LWE. Practical algorithms for solving lattice problems include LLL [41], BKW [42], and BKZ [43], which are discussed thoroughly in previous studies [8] and [24]. We primarily discuss BKZ, as it currently appears to be the best algorithm for lattice reduction. The basic idea behind BKZ is to solve SVP on sublattices of dimension $k$, where $k$ is known as the block size in BKZ.

Let $B = (b_0, \dots, b_{m-1})$ be a lattice basis for $\Lambda$, ordered so that $b_0$ is the shortest vector in $B$. Then, there is a constant $\gamma_0$ such that

$$\|b_0\|_2 = \gamma_0^m\,\mathrm{vol}(\Lambda)^{\frac{1}{m}}.$$

We call $\gamma_0^m$ the Hermite factor and $\gamma_0$ the root-Hermite factor of the lattice basis $B$. The Hermite factor is crucial in determining the cost and runtime of lattice reduction algorithms, especially BKZ. For a block size $k$ in BKZ, Chen [44] shows that the algorithm is expected to output a lattice basis $B$ with $\gamma_0$ satisfying

(10) $$\lim_{m \to \infty}\gamma_0 = \left(\frac{k}{2\pi\exp(1)}(\pi k)^{\frac{1}{k}}\right)^{\frac{1}{2(k-1)}}.$$

In practice, the limiting value from equation (10) is used to estimate $\gamma_0$ for a finite-dimensional lattice [24]. One can thus compute a lattice basis $B$ with root-Hermite factor $\gamma_0$ given approximately by the estimate in equation (10). For ease of analysis, $\gamma_0$ is often approximated by either $\gamma_0 = k^{\frac{1}{2k}}$ or $\gamma_0 = 2^{\frac{1}{k}}$. Albrecht et al. [24] show that for block sizes $50 \le k \le 250$, $2^{\frac{1}{k}}$ actually approximates the estimate of $\gamma_0$ from equation (10) better than $k^{\frac{1}{2k}}$.
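This comparison is straightforward to reproduce numerically:

```python
# Compare the two common root-Hermite approximations against the BKZ
# estimate of equation (10) over block sizes in [50, 250].
import math

def gamma0_bkz(k):
    """Limit value from equation (10)."""
    return (k / (2 * math.pi * math.e) * (math.pi * k) ** (1 / k)) \
        ** (1 / (2 * (k - 1)))

for k in range(50, 251, 50):
    est = gamma0_bkz(k)
    err_pow2 = abs(2 ** (1 / k) - est)
    err_k = abs(k ** (1 / (2 * k)) - est)
    # Consistent with Albrecht et al.: 2^(1/k) is the closer approximation
    assert err_pow2 < err_k
```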

Our goal is to obtain a basis with a specific target size of $\|b_0\|_2$, while we are free to choose the lattice dimension $m$, which is the number of LWE samples used. This can be difficult, since the relationship between $\gamma_0$ and $\|b_0\|_2$ depends on $m$. In this scenario, one of the earlier estimates for $\gamma_0$ (e.g., $\gamma_0 = k^{\frac{1}{2k}}$ or $\gamma_0 = 2^{\frac{1}{k}}$) can be used to determine $m$ based on a chosen block size $k$. In practice, the best root-Hermite factor $\gamma_0$ that can be obtained via lattice reduction algorithms is about $\gamma_0 \approx 1.1011$ to $\gamma_0 \approx 1.1013$ [45], and consistently lower values do not currently seem achievable. For a lattice of volume $q^n$ (e.g., $\Lambda_q^{\perp}(A)$), the study [46] gives the optimal size of $m$ as

$$m = \sqrt{\frac{n\log(q)}{\log(\gamma_0)}}$$

for use in lattice reduction algorithms to obtain the best result.

The study by van de Pol and Smart [45] introduces a technique that does not rely on first knowing $\gamma_0$ to find $m$. Instead, a security level is first chosen; then the best possible $\gamma_0$ obtainable from BKZ is found based on the security level, a chosen lattice dimension $m$, and the underlying error sampling distribution (in particular, its standard deviation). The homomorphic encryption standard and several implementations use $\chi = D_{\mathbb{Z},\alpha q}$ with $\alpha q = 8$, resulting in a standard deviation of $\sigma \approx 3.2$. In our proposed schemes, $\chi$ is the discrete uniform distribution on $[-n, n]$, with standard deviation

$$\sigma = \sqrt{\frac{(2n+1)^2 - 1}{12}} = \sqrt{\frac{4n^2 + 4n}{12}} \approx \frac{n}{\sqrt{3}}.$$
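The standard-deviation formula can be verified directly against the exact variance of the uniform distribution on the integers $[-n, n]$:

```python
# Check of the sigma formula above against the exact discrete variance.
import math

def sigma_formula(n):
    return math.sqrt(((2 * n + 1) ** 2 - 1) / 12)

def sigma_exact(n):
    vals = range(-n, n + 1)             # mean is 0 by symmetry
    return math.sqrt(sum(v * v for v in vals) / (2 * n + 1))

for n in (3, 10, 1024):
    assert abs(sigma_formula(n) - sigma_exact(n)) < 1e-9
# For large n, sigma is approximately n / sqrt(3) (small relative error)
assert abs(sigma_formula(1024) - 1024 / math.sqrt(3)) / sigma_formula(1024) < 0.001
```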

Note that in all these techniques, the dual $q$-ary lattice $\Lambda_q^{\perp}(A)$ is used instead of the lattice $\Lambda_q(A)$. However, $\Lambda_q^{\perp}(A)$ and $\Lambda_q(A)$ are equivalent to one another up to normalization (see, e.g., [46]).

After deciding on the block size $k$ and the optimal dimension $m$, one can estimate the cost of BKZ as follows. Among algorithms that solve SVP for lattices of rank $k$, the fastest known classical algorithm (using sieving) [47] runs in time $2^{0.292k + o(k)}$, and the fastest known quantum algorithm [48] runs in time $2^{0.265k + o(k)}$. With BKZ, we can have up to $8m$ calls to an oracle solving SVP using a sieving algorithm, meaning that the total costs we can expect for BKZ are $8m \cdot 2^{0.292k + o(k)}$ classically and $8m \cdot 2^{0.265k + o(k)}$ quantumly [8].
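The resulting cost model is easy to tabulate (dropping the $o(k)$ terms); the values of $m$ and $k$ below are arbitrary illustrative choices:

```python
# Rough BKZ cost model: 8*m*2^(c*k), expressed in bits (log2).
import math

def bkz_cost_bits(m, k, exponent):
    """log2 of 8*m*2^(exponent*k)."""
    return math.log2(8 * m) + exponent * k

m, k = 1024, 400
classical = bkz_cost_bits(m, k, 0.292)   # sieving, classical
quantum = bkz_cost_bits(m, k, 0.265)     # sieving, quantum
# log2(8*1024) = 13, so the costs are 13 + 116.8 and 13 + 106 bits
assert abs(classical - 129.8) < 1e-9
assert abs(quantum - 119.0) < 1e-9
assert quantum < classical
```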

In summary, to estimate the total cost of an attack, one first determines the expected size of $e$ from the parameters $n$, $q$, and the standard deviation of the chosen error sampling distribution $\chi$. From this, we can calculate the required root-Hermite factor $\gamma_0$, which then allows us to determine the block size $k$, the dimension $m$, and the total cost of BKZ. Though a thorough security analysis of our strategy for picking leveled homomorphic encryption parameters remains to be conducted, extensive research and estimates have been performed on the best known algorithms for solving LWE, as briefly shown above. We point the reader to previous studies [8,9,24,49] for a more complete discussion and further references on security.

6 Conclusion

We have presented a detailed mathematical foundation for the BGV, BFV, and CKKS homomorphic encryption schemes, aligning our work with the functionalities proposed in recent homomorphic encryption standards. By providing protocol algorithms and correctness proofs, we have ensured that these schemes are not only theoretically sound but also practical for implementation. Our proposed improvements, particularly in noise management and leveled homomorphic computation, enhance the efficiency and applicability of these schemes by reducing ciphertext expansion and storage requirements. In future works, we plan to further analyze the impacts of these noise bounds on precision accuracy in CKKS.

  1. Funding information: This work is based upon work supported by the National Center for Transportation Cybersecurity and Resiliency (TraCR) (a U.S. Department of Transportation National University Transportation Center) headquartered at Clemson University, Clemson, South Carolina, USA. Any opinions, findings, conclusions, and recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of TraCR, and the U.S. Government assumes no liability for the contents or use thereof.

  2. Author contributions: Both authors contributed equally to the content, writing, and editing of this article. Both authors have accepted responsibility for the entire content of this manuscript and consented to its submission to the journal, reviewed all the results and approved the final version of the manuscript.

  3. Conflict of interest: The authors state no conflict of interest.

Appendix A Proofs of Lemmas

Proof of Lemma 2.1

By assumption, $b_0 + a_0s \equiv e_0 \bmod (\phi(x), Q)$. Therefore, there exists $r \in R_n$ such that

$$b_0 \equiv -a_0s + e_0 + Qr \bmod \phi(x).$$

Since $a_0' = \left\lfloor \frac{qa_0}{Q} \right\rceil$ and $b_0' = \left\lfloor \frac{qb_0}{Q} \right\rceil$, there are polynomials $\varepsilon_1, \varepsilon_2 \in \mathbb{R}[x]/(\phi(x))$ such that $a_0' = \frac{qa_0}{Q} - \varepsilon_1$ and $b_0' = \frac{qb_0}{Q} - \varepsilon_2$ with $\|\varepsilon_1\|_\infty, \|\varepsilon_2\|_\infty \le \frac{1}{2}$. Then,

$$b_0' = \frac{qb_0}{Q} - \varepsilon_2 \equiv -\frac{q}{Q}a_0s + \frac{q}{Q}e_0 + qr - \varepsilon_2 \bmod \phi(x) \equiv -a_0's + \frac{q}{Q}e_0 + qr - \varepsilon_2 - \varepsilon_1s \bmod \phi(x).$$

Let $e_0' = \frac{q}{Q}e_0 - \varepsilon_2 - \varepsilon_1s$. Then, $b_0' + a_0's \equiv e_0' \bmod (\phi(x), q)$. Note that $\|e_0'\|_\infty \le \frac{q}{Q}E + \frac{\delta_R\|s\|_\infty + 1}{2}$. By assumption $\frac{Q}{q} > \frac{2E}{\delta_R\|s\|_\infty - 1}$, so $\|e_0'\|_\infty < \delta_R\|s\|_\infty$.□

Proof of Lemma 2.2

By assumption, we first note that $(a_0, b_0) \in R_{n,Q}^2$ satisfies $b_0 + a_0s \equiv D_Qm_0 + e_0 \bmod (\phi(x), Q)$. Therefore, there is some $r_Q \in R_n$ such that $b_0 + a_0s \equiv D_Qm_0 + e_0 + Qr_Q \bmod \phi(x)$. Let $\varepsilon_Q = \frac{Q}{t} - D_Q$, $\varepsilon_q = \frac{q}{t} - D_q$, $\varepsilon_1 = \frac{qa_0}{Q} - a_0'$, and $\varepsilon_2 = \frac{qb_0}{Q} - b_0'$. Then,

$$b_0' = \frac{qb_0}{Q} - \varepsilon_2 \equiv -\frac{qa_0s}{Q} + \frac{qD_Q}{Q}m_0 + \frac{qe_0}{Q} - \varepsilon_2 + qr_Q \bmod \phi(x).$$

Note that as $D_Q = \frac{Q}{t} - \varepsilon_Q$, we have that $\frac{qD_Q}{Q} = \frac{q}{t} - \frac{q\varepsilon_Q}{Q}$. Since $\frac{q}{t} = D_q + \varepsilon_q$, we have $\frac{qD_Q}{Q} = D_q + \varepsilon_q - \frac{q\varepsilon_Q}{Q}$. Therefore,

$$b_0' \equiv -\frac{qa_0s}{Q} + \frac{qD_Q}{Q}m_0 + \frac{qe_0}{Q} - \varepsilon_2 + qr_Q \bmod \phi(x)$$
$$\equiv -a_0's - \varepsilon_1s + D_qm_0 + \left(\varepsilon_q - \frac{q\varepsilon_Q}{Q}\right)m_0 + \frac{qe_0}{Q} - \varepsilon_2 + qr_Q \bmod \phi(x).$$

Let $e_0' = \frac{qe_0}{Q} + \left(\varepsilon_q - \frac{q\varepsilon_Q}{Q}\right)m_0 - \varepsilon_2 - \varepsilon_1s$. Then, $b_0' + a_0's \equiv D_qm_0 + e_0' \bmod (\phi(x), q)$. Furthermore, if $Q > q$, $t \mid (Q-1)$, and $t \mid (q-1)$, then $D_Q = \frac{Q-1}{t}$ and $\varepsilon_Q = \frac{1}{t}$. Similarly, $D_q = \frac{q-1}{t}$ and $\varepsilon_q = \frac{1}{t}$. Then, $\varepsilon_q - \frac{q\varepsilon_Q}{Q} = \frac{1}{t}\left(1 - \frac{q}{Q}\right) < \frac{1}{t}$. So,

$$\|e_0'\|_\infty \le \frac{q}{Q}\|e_0\|_\infty + \left|\varepsilon_q - \frac{q\varepsilon_Q}{Q}\right| \cdot \frac{t}{2} + \|\varepsilon_2\|_\infty + \|\varepsilon_1s\|_\infty \le \frac{q}{Q}E + \frac{1}{2} + \frac{1}{2} + \frac{\delta_R\|s\|_\infty}{2} = \frac{q}{Q}E + 1 + \frac{\delta_R\|s\|_\infty}{2}.$$

By assumption $\frac{Q}{q} > \frac{2E}{\delta_R\|s\|_\infty - 2}$, so $\|e_0'\|_\infty < \delta_R\|s\|_\infty$.□

Proof of Lemma 2.3

First, note that $a_0'$ and $b_0'$ are polynomials with integer coefficients, since both $qa_0 + t\omega_a$ and $qb_0 + t\omega_b$ are equivalent to $0$ modulo $Q$. Then, we have

(A1) $$b_0' + a_0's \equiv \frac{qb_0 + t\omega_b}{Q} + \frac{qa_0 + t\omega_a}{Q}s \bmod \phi(x)$$

(A2) $$\equiv \frac{q}{Q}(b_0 + a_0s) + \frac{t}{Q}(\omega_b + \omega_as) \bmod \phi(x)$$

(A3) $$\equiv \frac{q}{Q}(m_0 + te_0) + \frac{t}{Q}(\omega_b + \omega_as) + qr \bmod \phi(x)$$

(A4) $$\equiv m_0 + t\left(\frac{q - Q}{Qt}m_0 + \frac{q}{Q}e_0 + \frac{1}{Q}(\omega_b + \omega_as)\right) + qr \bmod \phi(x)$$

(A5) $$\equiv m_0 + te_0' \bmod (\phi(x), q),$$

where $e_0' = \frac{q - Q}{Qt}m_0 + \frac{q}{Q}e_0 + \frac{1}{Q}(\omega_b + \omega_as)$. Since $Q > q$, we have $\left|\frac{q - Q}{Qt}\right| < \frac{1}{t}$. Thus, we have that

$$\|e_0'\|_\infty \le \left|\frac{q - Q}{Qt}\right|\|m_0\|_\infty + \frac{q}{Q}\|e_0\|_\infty + \frac{1}{Q}\|\omega_b\|_\infty + \frac{1}{Q}\|\omega_as\|_\infty \le \frac{1}{2} + \frac{q}{Q}E + \frac{1}{2} + \frac{\delta_R\|s\|_\infty}{2} = \frac{q}{Q}E + 1 + \frac{\delta_R\|s\|_\infty}{2}.$$

By assumption $\frac{Q}{q} > \frac{2E}{\delta_R\|s\|_\infty - 2}$, so $\|e_0'\|_\infty < \delta_R\|s\|_\infty$.□

Proof of Lemma 3.1

By assumption, the public key $\mathrm{pk} = (k_0, k_1)$ satisfies

$$k_1 + k_0s \equiv e \bmod (\phi(x), p_0q)$$

for some noise $e \in R_n$ with $\|e\|_\infty \le \rho$. Then,

$$b_0 + a_0s \equiv k_1u + e_2 + (k_0u + e_1)s \bmod (\phi(x), p_0q)$$
$$\equiv (-k_0s + e)u + e_2 + (k_0u + e_1)s \bmod (\phi(x), p_0q)$$
$$\equiv -k_0su + eu + e_2 + k_0su + e_1s \bmod (\phi(x), p_0q)$$
$$\equiv eu + e_2 + e_1s \bmod (\phi(x), p_0q).$$

Let $e_0 = eu + e_2 + e_1s$. Then, $b_0 + a_0s \equiv e_0 \bmod (\phi(x), p_0q)$. Note that

$$\|e_0\|_\infty = \|eu + e_2 + e_1s\|_\infty \le \delta_R\|e\|_\infty\|u\|_\infty + \|e_2\|_\infty + \delta_R\|e_1\|_\infty\|s\|_\infty \le \delta_R^2\|u\|_\infty + \delta_R + \delta_R^2\|s\|_\infty = \delta_R^2(\|u\|_\infty + \|s\|_\infty) + \delta_R.$$

Since $\|s\|_\infty = \|u\|_\infty = 1$, $\|e_0\|_\infty \le 2\delta_R^2 + \delta_R$. By assumption $p_0 > \frac{4\delta_R^2 + 2\delta_R}{\delta_R - 1}$, so $\mathrm{BFV.Modreduce}(\mathrm{ct}_0, p_0q, q)$ outputs $\mathrm{ct}_0 = (a_0, b_0^*)$ satisfying

$$b_0^* + a_0s \equiv e_0 \bmod (\phi(x), q)$$

with $\|e_0\|_\infty \le \rho$ by Lemma 2.1. Since $b_0 = [b_0^* + D_qm_0]_q$, we have

$$b_0 + a_0s \equiv D_qm_0 + e_0 \bmod (\phi(x), q).$$□

Proof of Lemma 3.2

By assumption, $b_0 + a_0s \equiv D_qm_0 + e_0 \bmod (\phi(x), q)$ with $\|e_0\|_\infty < \frac{D_q - 1}{2}$. There is a polynomial $r \in R_n$ such that $[b_0 + a_0s]_{\phi(x),q} \equiv D_qm_0 + e_0 + qr \bmod \phi(x)$. Thus,

$$tc = t[b_0 + a_0s]_{\phi(x),q} = tD_qm_0 + te_0 + tqr = qm_0 - m_0 + te_0 + tqr.$$

Then,

$$\left[\left\lfloor \frac{tc}{q} \right\rceil\right]_t = \left[m_0 + \left\lfloor \frac{-m_0 + te_0}{q} \right\rceil + tr\right]_t = \left[m_0 + \left\lfloor \frac{t}{q}\left(e_0 - \frac{1}{t}m_0\right) \right\rceil\right]_t = m_0.$$

The last equality follows from the fact that $\left\lfloor \frac{t}{q}\left(e_0 - \frac{1}{t}m_0\right) \right\rceil = 0$, as $\|e_0\|_\infty < \frac{D_q - 1}{2}$.□

Proof of Lemma 3.3

By assumption, each $\mathrm{ct}_i = (a_i, b_i) \in R_{n,q}^2$ satisfies

$$b_i + a_is \equiv D_qm_i + e_i \bmod (\phi(x), q)$$

with $\|e_i\|_\infty \le E$ for all $i = 0, \dots, k-1$. Then,

$$\mathrm{ct}_0 = \left[\sum_{i=0}^{k-1}\alpha_i\mathrm{ct}_i\right]_q = \left(\left[\sum_{i=0}^{k-1}\alpha_ia_i\right]_q, \left[\sum_{i=0}^{k-1}\alpha_ib_i\right]_q\right).$$

Since $t \mid (q-1)$ and $D_q = \frac{q-1}{t}$, we have $D_qt \equiv -1 \bmod q$. There exists $r \in R_n$ such that

$$\sum_{i=0}^{k-1}\alpha_im_i = \left[\sum_{i=0}^{k-1}\alpha_im_i\right]_t + tr$$

with $\|r\|_\infty \le M$. Then,

$$\sum_{i=0}^{k-1}\alpha_ib_i + \sum_{i=0}^{k-1}\alpha_ia_is \equiv D_q\sum_{i=0}^{k-1}\alpha_im_i + \sum_{i=0}^{k-1}\alpha_ie_i \bmod (\phi(x), q)$$
$$\equiv D_q\left[\sum_{i=0}^{k-1}\alpha_im_i\right]_t + \sum_{i=0}^{k-1}\alpha_ie_i + D_qtr \bmod (\phi(x), q)$$
$$\equiv D_q\left[\sum_{i=0}^{k-1}\alpha_im_i\right]_t + \sum_{i=0}^{k-1}\alpha_ie_i - r \bmod (\phi(x), q).$$

Let $e = \sum_{i=0}^{k-1}\alpha_ie_i - r$. Then, we have

$$\sum_{i=0}^{k-1}\alpha_ib_i + \sum_{i=0}^{k-1}\alpha_ia_is \equiv D_q\left[\sum_{i=0}^{k-1}\alpha_im_i\right]_t + e \bmod (\phi(x), q).$$

So, $\mathrm{ct}_0$ is a BFV ciphertext with noise term $e$ and

$$\|e\|_\infty \le \left\|\sum_{i=0}^{k-1}\alpha_ie_i\right\|_\infty + \|r\|_\infty \le ME + M = M(E + 1).$$□

Proof of Lemma 3.4

By assumption, we have for i = 0,1 ,

(A6) b i + a i s D q m i + e i mod ( ϕ ( x ) , q ) ,

with a i q 2 , b i q 2 , and e i E . We can rewrite (A6) as follows:

(A7) b i + a i s D q m i + e i + q r i mod ϕ ( x ) ,

where

r i 1 q b i + a i s 1 q q 2 + q 2 δ R s 1 2 ( 1 + δ R s ) δ R s .

On the one hand,

( t q ) ( b 0 + a 0 s ) ( b 1 + a 1 s ) ( t q ) ( b 0 b 1 + ( b 1 a 0 + b 0 a 1 ) s + a 0 a 1 s 2 ) mod ϕ ( x )

( t q ) ( c 0 + c 1 s + c 2 s 2 ) mod ϕ ( x ) c 0 + c 1 s + c 2 s 2 + ( ε 0 + ε 1 s + ε 2 s 2 ) mod ϕ ( x ) ,

with ε i 1 2 for 0 i 2 . Let r m R n so that m 0 m 1 = [ m 0 m 1 ] ϕ ( x ) , t + t r m with r m t δ R 4 . Then on the other hand, noting that ( t q ) D q = ( t q ) ( q t 1 t ) = 1 1 q and t D q 1 mod q , we have

( t q ) ( D q m 0 + e 0 + q r 0 ) ( D q m 1 + e 1 + q r 1 ) ( t q ) ( D q 2 m 0 m 1 + D q ( m 0 e 1 + m 1 e 0 ) + q ( e 0 r 1 + r 0 e 1 ) + e 0 e 1 + q D q ( m 0 r 1 + r 0 m 1 ) + q 2 r 0 r 1 ) mod ϕ ( x ) ( 1 1 q ) D q [ m 0 m 1 ] ϕ ( x ) , t + ( 1 1 q ) D q t r m + ( 1 1 q ) ( m 0 e 1 + m 1 e 0 ) + t ( e 0 r 1 + r 0 e 1 ) + ( t q ) e 0 e 1 + t D q ( m 0 r 1 + r 0 m 1 ) + t q r 0 r 1 mod ϕ ( x ) D q [ m 0 m 1 ] ϕ ( x ) , t ( D q q ) [ m 0 m 1 ] ϕ ( x ) , t + D q t r m ( D q t q ) r m + ( m 0 e 1 + m 1 e 0 ) ( 1 q ) ( m 0 e 1 + m 1 e 0 ) + t ( e 0 r 1 + r 0 e 1 ) + ( t q ) e 0 e 1 + t D q ( m 0 r 1 + r 0 m 1 ) + t q r 0 r 1 mod ϕ ( x ) D q [ m 0 m 1 ] ϕ ( x ) , t r m + ( m 0 e 1 + m 1 e 0 ) + t ( e 0 r 1 + r 0 e 1 ) ( m 0 r 1 + r 0 m 1 ) ( D q q ) [ m 0 m 1 ] ϕ ( x ) , t ( 1 q ) ( m 0 e 1 + m 1 e 0 ) + ( t q ) e 0 e 1 mod ( ϕ ( x ) , q ) .

Let

ε 1 = ε 0 + ε 1 s + ε 2 s 2 and ε 2 = r m + ( m 0 e 1 + m 1 e 0 ) + t ( e 0 r 1 + r 0 e 1 ) ( m 0 r 1 + r 0 m 1 ) ( D q q ) [ m 0 m 1 ] ϕ ( x ) , t ( 1 q ) ( m 0 e 1 + m 1 e 0 ) + ( t q ) e 0 e 1 .

Let e = ε 2 ε 1 . Then,

c 0 + c 1 s + c 2 s 2 D q [ m 0 m 1 ] ϕ ( x ) , t + e mod ( ϕ ( x ) , q ) .

We now turn to the noise bound for e . First, note that

ε 1 = ε 0 + ε 1 s + ε 2 s 2 1 2 + δ R s 2 + δ R 2 s 2 2 = ( 1 + δ R s ) 2 2 .

To simplify the noise analysis for ε 2 , we break down ε 2 into two pieces. Let

ω 1 = r m + ( m 0 e 1 + m 1 e 0 ) + t ( e 0 r 1 + r 0 e 1 ) ( m 0 r 1 + r 0 m 1 ) and ω 2 = ( D q q ) [ m 0 m 1 ] ϕ ( x ) , t ( D q t q ) r m ( 1 q ) ( m 0 e 1 + m 1 e 0 ) + ( t q ) e 0 e 1 .

Then, ε 2 = ω 1 + ω 2 . For ω 1 , we have

\[
\|\omega_1\| \le \|r_m\| + \|m_0 e_1 + m_1 e_0\| + t\,\|e_0 r_1 + r_0 e_1\| + \|m_0 r_1 + r_0 m_1\| \le \frac{t\delta_R}{4} + E t \delta_R + 2 E t \delta_R^2 \|s\| + t \delta_R^2 \|s\|.
\]

For $\omega_2$, note that since $D_q < \frac{q}{t}$ and $q > 2t$, we have

\[
\|\omega_2\| \le \frac{D_q}{q}\|[m_0 m_1]_{\phi(x),t}\| + \frac{D_q t}{q}\|r_m\| + \frac{1}{q}\|m_0 e_1 + m_1 e_0\| + \frac{t}{q}\|e_0 e_1\| < \frac{1}{t}\|[m_0 m_1]_{\phi(x),t}\| + \|r_m\| + \frac{1}{q}\|m_0 e_1 + m_1 e_0\| + \frac{t}{q}\|e_0 e_1\| \le \frac{1}{2} + \frac{t\delta_R}{4} + \frac{E t \delta_R}{q} + \frac{E\delta_R}{2} \le \frac{1}{2} + \frac{t\delta_R}{4} + E\delta_R.
\]

The bound on $\frac{t}{q}\|e_0 e_1\|$ follows from the fact that $e_0, e_1$ are ciphertext noise terms, so by assumption $\|e_i\| \le E < \frac{D_q - 1}{2} < \frac{D_q}{2} < \frac{q}{2t}$ for $i = 0, 1$. So,

\[
\frac{t}{q}\|e_0 e_1\| \le \frac{t}{q} E^2 \delta_R < \frac{E\delta_R}{2}.
\]

By assumption, $\delta_R \ge 16$, $E \ge 1$, $t \ge 2$, and $\|s\| = 1$. So,

\[
\begin{aligned}
\|e\| &\le \frac{t\delta_R}{4} + E t \delta_R + 2 E t \delta_R^2 \|s\| + t \delta_R^2 \|s\| + \frac{1}{2} + \frac{t\delta_R}{4} + E\delta_R + \frac{(\delta_R\|s\| + 1)^2}{2}\\
&\le \frac{t\delta_R}{2} + E t \delta_R + 3 E t \delta_R^2 \|s\| + 1 + E\delta_R + \frac{\delta_R^2\|s\|^2}{2} + \delta_R\|s\|\\
&= E t \delta_R^2 \|s\|^2 \left( \frac{1}{2E\delta_R\|s\|^2} + \frac{1}{\delta_R\|s\|^2} + \frac{3}{\|s\|} + \frac{1}{E t \delta_R^2 \|s\|^2} + \frac{1}{t\delta_R\|s\|^2} + \frac{1}{2Et} + \frac{1}{E t \delta_R \|s\|} \right)\\
&\le E t \delta_R^2 \|s\|^2 \left( \frac{1}{32} + \frac{1}{16} + 3 + \frac{1}{512} + \frac{1}{32} + \frac{1}{4} + \frac{1}{32} \right)\\
&< 3.5\, E t \delta_R^2 \|s\|^2 = 3.5\, t E \rho^2. \qquad\Box
\end{aligned}
\]
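The scale-and-round multiplication step bounded above can be sanity-checked numerically. The sketch below is a toy, degree-0 version (polynomials collapsed to integers, so $\delta_R = 1$); the parameters $t, q$ are hypothetical and chosen only so that $t \mid q - 1$, and it is not the paper's algorithm, just the same invariant:

```python
import random

# Toy degree-0 BFV multiplication check (hypothetical parameters with
# t | q - 1, so D_q = (q-1)/t as in the proof).
t = 256
q = 256 * 4096 + 1
Dq = (q - 1) // t
assert (t * Dq) % q == q - 1           # t*D_q ≡ -1 (mod q)

random.seed(0)
s = random.choice([-1, 0, 1])          # ternary secret

def encrypt(m):
    """Produce (a, b) with b + a*s ≡ D_q*m + e (mod q) for small noise e."""
    a = random.randrange(q)
    e = random.randint(-1, 1)
    b = (Dq * m + e - a * s) % q
    return a, b

def scale_round(x):
    """Integer version of round(t*x/q) (valid for negative x too)."""
    return (t * x + q // 2) // q

m0, m1 = 7, 13
a0, b0 = encrypt(m0)
a1, b1 = encrypt(m1)

# Tensor the ciphertexts, then scale each component by t/q and round.
c0 = scale_round(b0 * b1) % q
c1 = scale_round(b1 * a0 + b0 * a1) % q
c2 = scale_round(a0 * a1) % q

# Decrypt the triple: centered lift of c0 + c1*s + c2*s^2, then scale by t/q.
v = (c0 + c1 * s + c2 * s * s) % q
if v > q // 2:
    v -= q
m = scale_round(v) % t
assert m == (m0 * m1) % t
```

Because $\delta_R = 1$ here, the proof's bound $3.5\,tE\rho^2$ is far below $\frac{q}{2t}$, so decryption succeeds for any choice of randomness.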

Proof of Lemma 3.5

Notice that as $\beta_1 \equiv c_2 k_1 \bmod (\phi(x), p_1 q)$, we can write

\[
\beta_1 \equiv c_2 k_1 + p_1 q \alpha_1 \bmod \phi(x)
\]

for some $\alpha_1 \in \mathbb{Z}[x]$. Then,

\[
d_1 = \left\lfloor \frac{\beta_1}{p_1} \right\rceil \equiv \left\lfloor \frac{c_2 k_1 + p_1 q \alpha_1}{p_1} \right\rceil \equiv \frac{c_2 k_1}{p_1} + q\alpha_1 + \varepsilon_1 \equiv \frac{c_2 k_1}{p_1} + \varepsilon_1 \bmod (\phi(x), q)
\]

for some $\varepsilon_1 \in \mathbb{R}[x]/(\phi(x))$ with $\|\varepsilon_1\| \le \frac{1}{2}$. Note also that as $c_2 k_0 \equiv -c_2 k_1 s + c_2 e_1 + c_2 p_1 s^2 \bmod (\phi(x), p_1 q)$, we have that for some $\alpha_0 \in \mathbb{Z}[x]$,

\[
\beta_0 \equiv c_2 k_0 \equiv -c_2 k_1 s + c_2 e_1 + c_2 p_1 s^2 + p_1 q \alpha_0 \bmod \phi(x).
\]

Then,

\[
\begin{aligned}
d_0 = \left\lfloor \frac{\beta_0}{p_1} \right\rceil &\equiv \left\lfloor \frac{-c_2 k_1 s + c_2 e_1 + c_2 p_1 s^2 + p_1 q \alpha_0}{p_1} \right\rceil \bmod (\phi(x), q)\\
&\equiv \left\lfloor \frac{-c_2 k_1 s}{p_1} + \frac{c_2 e_1}{p_1} \right\rceil + c_2 s^2 + q\alpha_0 \bmod (\phi(x), q)\\
&\equiv \frac{-c_2 k_1 s}{p_1} + \frac{c_2 e_1}{p_1} + c_2 s^2 + q\alpha_0 + \varepsilon_0 \bmod (\phi(x), q)\\
&\equiv \frac{-c_2 k_1 s}{p_1} + \frac{c_2 e_1}{p_1} + c_2 s^2 + \varepsilon_0 \bmod (\phi(x), q)
\end{aligned}
\]

for some $\varepsilon_0 \in \mathbb{R}[x]/(\phi(x))$ with $\|\varepsilon_0\| \le \frac{1}{2}$. Therefore,

\[
\begin{aligned}
d_0 + d_1 s &\equiv c_2 s^2 - \frac{c_2 k_1 s}{p_1} + \frac{c_2 e_1}{p_1} + \varepsilon_0 + \frac{c_2 k_1 s}{p_1} + \varepsilon_1 s \bmod (\phi(x), q)\\
&\equiv c_2 s^2 + \frac{c_2 e_1}{p_1} + \varepsilon_0 + \varepsilon_1 s \bmod (\phi(x), q).
\end{aligned}
\]

Let $e' = \frac{c_2 e_1}{p_1} + \varepsilon_0 + \varepsilon_1 s$. Then, $d_0 + d_1 s \equiv c_2 s^2 + e' \bmod (\phi(x), q)$. By assumption, $(c_0, c_1, c_2)$ satisfies

\[
c_0 + c_1 s + c_2 s^2 \equiv D_q [m_0 m_1]_{\phi(x),t} + e \bmod (\phi(x), q).
\]

Let $e^* = e + e'$. Then,

\[
\begin{aligned}
c_1' + c_0' s &\equiv c_0 + d_0 + (c_1 + d_1)s \bmod (\phi(x), q)\\
&\equiv c_0 + c_1 s + c_2 s^2 + e' \bmod (\phi(x), q)\\
&\equiv D_q [m_0 m_1]_{\phi(x),t} + e + e' \bmod (\phi(x), q)\\
&\equiv D_q [m_0 m_1]_{\phi(x),t} + e^* \bmod (\phi(x), q).
\end{aligned}
\]

Now, we turn to the noise bounds on $e'$ and $e^*$. Note that as $e_1 \leftarrow \chi_\rho$, we have $\|e_1\| \le \rho = \delta_R\|s\|$. If $p_1 \ge 6q$ and $\delta_R \ge 16$, then

\[
\|e'\| = \left\| \frac{c_2 e_1}{p_1} + \varepsilon_0 + \varepsilon_1 s \right\| \le \frac{q}{2 p_1}\delta_R^2 \|s\| + \frac{1}{2} + \frac{\delta_R \|s\|}{2} \le \delta_R^2 \|s\| \left( \frac{1}{12} + \frac{1}{2\delta_R^2 \|s\|} + \frac{1}{2\delta_R} \right) < \frac{1}{8} \delta_R^2 \|s\|.
\]

By Lemma 3.4, we have that $\|e\| \le 3.5\,Et\rho^2 = 3.5\,Et\delta_R^2\|s\|^2$. As relinearization introduces additional noise $e'$ bounded by $\frac{1}{8}\delta_R^2\|s\|$, we have

\[
\|e^*\| \le 3.5\,Et\delta_R^2\|s\|^2 + \frac{1}{8}\delta_R^2\|s\| = Et\delta_R^2\|s\|^2 \left( 3.5 + \frac{1}{8Et\|s\|} \right) \le Et\delta_R^2\|s\|^2 \left( 3.5 + \frac{1}{16} \right) < 3.6\,Et\delta_R^2\|s\|^2 = 3.6\,Et\rho^2. \qquad\Box
\]

Proof of Lemma 3.6

By assumption, the public key $\mathrm{pk} = (k_0, k_1)$ satisfies

\[
k_1 + k_0 s \equiv t e \bmod (\phi(x), p_0 q)
\]

for some noise $e \in R_n$ with $\|e\| \le \rho$. Then,

\[
\begin{aligned}
b_0 + a_0 s &\equiv k_1 u + t e_2 + (k_0 u + t e_1)s \bmod (\phi(x), p_0 q)\\
&\equiv (-k_0 s + t e)u + t e_2 + (k_0 u + t e_1)s \bmod (\phi(x), p_0 q)\\
&\equiv -k_0 s u + t e u + t e_2 + k_0 s u + t e_1 s \bmod (\phi(x), p_0 q)\\
&\equiv t(e u + e_2 + e_1 s) \bmod (\phi(x), p_0 q)\\
&\equiv t e_0 \bmod (\phi(x), p_0 q).
\end{aligned}
\]

Let $e_0 = e u + e_2 + e_1 s$. Then, $b_0 + a_0 s \equiv t e_0 \bmod (\phi(x), p_0 q)$. Since $\|s\| = \|u\| = 1$, we have $\|e_0\| \le 2\delta_R^2 + \delta_R$. By assumption $p_0 > \frac{4\delta_R^2 + 2\delta_R}{\delta_R - 2}$, so BGV.Modreduce$(\mathrm{ct}_0, p_0 q, q)$ outputs $\mathrm{ct}_0' = (a_0', b_0^*)$ satisfying

\[
b_0^* \equiv -a_0' s + t e_0' \bmod (\phi(x), q)
\]

with $\|e_0'\| \le \rho$ by Lemma 2.3. Since $b_0' = [b_0^* + m_0]_q$, we have

\[
b_0' + a_0' s \equiv m_0 + t e_0' \bmod (\phi(x), q). \qquad\Box
\]
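The encryption invariant $b_0' + a_0' s \equiv m_0 + t e_0'$ can be illustrated with a toy, degree-0 sketch. The parameters are hypothetical, and the $p_0 q \to q$ modulus-reduction step is skipped, so the public key and ciphertext live at a single modulus $q$ (the invariant being checked is the same):

```python
import random

# Toy degree-0 BGV public-key encryption/decryption (hypothetical parameters;
# the p0*q -> q modulus reduction of the real algorithm is omitted).
t, q = 17, 1_000_003
random.seed(1)
s = random.choice([-1, 1])

# Public key: k1 + k0*s ≡ t*e (mod q)
k0 = random.randrange(q)
e = random.randint(-2, 2)
k1 = (t * e - k0 * s) % q

def encrypt(m):
    """b0 + a0*s ≡ m + t*(e*u + e2 + e1*s) (mod q), as in the proof."""
    u = random.choice([-1, 1])
    e1, e2 = random.randint(-2, 2), random.randint(-2, 2)
    a0 = (k0 * u + t * e1) % q
    b0 = (k1 * u + t * e2 + m) % q
    return a0, b0

def decrypt(a0, b0):
    v = (b0 + a0 * s) % q          # = m + t*e0 for some small e0
    if v > q // 2:
        v -= q                     # centered lift
    return v % t                   # strip t*e0

m = 11
assert decrypt(*encrypt(m)) == m
```

Decryption works whenever $|m + t e_0| < q/2$, which here holds by a wide margin.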

Proof of Lemma 3.7

By assumption, we have for $i = 0, 1$,

\[
b_i + a_i s \equiv m_i + t e_i \bmod (\phi(x), q)
\]

with $\|e_i\| \le E$. Let $r_m \in R_n$ such that $m_0 m_1 = [m_0 m_1]_{\phi(x),t} + t r_m$. Note that $\|r_m\| \le \frac{t\delta_R}{4}$. We have that

\[
\begin{aligned}
c_0 + c_1 s + c_2 s^2 &\equiv b_0 b_1 + (b_1 a_0 + b_0 a_1)s + a_0 a_1 s^2 \bmod (\phi(x), q)\\
&\equiv (b_0 + a_0 s)(b_1 + a_1 s) \bmod (\phi(x), q)\\
&\equiv (m_0 + t e_0)(m_1 + t e_1) \bmod (\phi(x), q)\\
&\equiv m_0 m_1 + t(m_0 e_1 + m_1 e_0 + t e_0 e_1) \bmod (\phi(x), q)\\
&\equiv [m_0 m_1]_{\phi(x),t} + t(m_0 e_1 + m_1 e_0 + t e_0 e_1 + r_m) \bmod (\phi(x), q).
\end{aligned}
\]

Let $e = m_0 e_1 + m_1 e_0 + t e_0 e_1 + r_m$. Then,

\[
c_0 + c_1 s + c_2 s^2 \equiv [m_0 m_1]_{\phi(x),t} + t e \bmod (\phi(x), q)
\]

with

\[
\|e\| \le \delta_R t E + \delta_R t E^2 + \frac{t\delta_R}{4} \le 2\delta_R t E^2 + \frac{t\delta_R}{4} \le 2\delta_R t (E^2 + 1). \qquad\Box
\]
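The tensoring identity used in the proof can be checked numerically. The following toy, degree-0 sketch (hypothetical parameters, polynomials collapsed to integers) verifies that the product ciphertext triple still decrypts to $m_0 m_1 \bmod t$ under $s$ and $s^2$:

```python
import random

# Toy degree-0 check of the BGV multiplication invariant
# c0 + c1*s + c2*s^2 ≡ m0*m1 + t*e (mod q); hypothetical parameters.
t, q = 17, 10_000_019
random.seed(2)
s = random.choice([-1, 1])

def encrypt(m):
    a = random.randrange(q)
    e = random.randint(-2, 2)
    b = (m + t * e - a * s) % q    # b + a*s ≡ m + t*e (mod q)
    return a, b

m0, m1 = 4, 9
a0, b0 = encrypt(m0)
a1, b1 = encrypt(m1)

# Tensor: (b0 + a0*s)(b1 + a1*s) expands to c0 + c1*s + c2*s^2
c0 = (b0 * b1) % q
c1 = (b1 * a0 + b0 * a1) % q
c2 = (a0 * a1) % q

v = (c0 + c1 * s + c2 * s * s) % q
if v > q // 2:
    v -= q                         # centered lift of m0*m1 + t*e
assert v % t == (m0 * m1) % t
```

The check succeeds as long as $|m_0 m_1 + t e| < q/2$, which the bound $2\delta_R t(E^2+1)$ guarantees for these toy sizes.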

Proof of Lemma 3.8

First, observe that both $\beta_0 + t\omega_0$ and $\beta_1 + t\omega_1$ are divisible by $p_1$, since for $i = 0, 1$,

\[
\beta_i + t\omega_i = \beta_i + t\,[-t^{-1}\beta_i]_{p_1} \equiv \beta_i + t(-t^{-1}\beta_i) \equiv 0 \bmod p_1,
\]

so $d_0 = \frac{\beta_0 + t\omega_0}{p_1}$ and $d_1 = \frac{\beta_1 + t\omega_1}{p_1}$ have integer coefficients. Then,

\[
\begin{aligned}
d_0 + d_1 s &\equiv \frac{\beta_0 + t\omega_0}{p_1} + \frac{\beta_1 + t\omega_1}{p_1}s \bmod (\phi(x), q)\\
&\equiv \frac{c_2 k_0 + t\omega_0}{p_1} + \frac{c_2 k_1 + t\omega_1}{p_1}s \bmod (\phi(x), q)\\
&\equiv \frac{c_2(-k_1 s + p_1 s^2 + t e_1) + t\omega_0}{p_1} + \frac{c_2 k_1 + t\omega_1}{p_1}s \bmod (\phi(x), q)\\
&\equiv c_2 s^2 + t\,\frac{c_2 e_1 + \omega_0 + \omega_1 s}{p_1} \bmod (\phi(x), q).
\end{aligned}
\]

Let $e' = \frac{c_2 e_1}{p_1} + \frac{\omega_0 + \omega_1 s}{p_1}$, which has integer coefficients. Then, $d_0 + d_1 s \equiv c_2 s^2 + t e' \bmod (\phi(x), q)$. By assumption, $(c_0, c_1, c_2)$ satisfies

\[
c_0 + c_1 s + c_2 s^2 \equiv [m_0 m_1]_{\phi(x),t} + t e \bmod (\phi(x), q),
\]

where $\|e\| \le 2\delta_R t(E^2 + 1)$. Let $e^* = e + e'$. Then,

\[
\begin{aligned}
c_1' + c_0' s &\equiv c_0 + d_0 + (c_1 + d_1)s \bmod (\phi(x), q)\\
&\equiv c_0 + c_1 s + c_2 s^2 + t e' \bmod (\phi(x), q)\\
&\equiv [m_0 m_1]_{\phi(x),t} + t(e + e') \bmod (\phi(x), q)\\
&\equiv [m_0 m_1]_{\phi(x),t} + t e^* \bmod (\phi(x), q).
\end{aligned}
\]

Then, note that

\[
\|e'\| \le \frac{\|c_2 e_1\|}{p_1} + \frac{\|\omega_0 + \omega_1 s\|}{p_1} \le \frac{q}{2p_1}\delta_R^2\|s\| + \frac{p_1}{2p_1} + \frac{p_1}{2p_1}\delta_R\|s\| \le \frac{1}{12}\delta_R^2\|s\| + \frac{1}{2} + \frac{\delta_R\|s\|}{2} = \delta_R^2\|s\|\left(\frac{1}{12} + \frac{1}{2\delta_R^2\|s\|} + \frac{1}{2\delta_R}\right) \le \delta_R^2\|s\|\left(\frac{1}{12} + \frac{1}{512} + \frac{1}{32}\right) < \frac{1}{8}\delta_R^2\|s\|.
\]

The bound on $e^*$ then follows immediately.□

Proof of Lemma 3.9

By assumption, we first note that $(a_0, b_0) \in R_{n,Q}^2$ satisfies

\[
b_0 \equiv -a_0 s + m_0 + e_0 \bmod (\phi(x), Q).
\]

Therefore, there is some $r \in \mathbb{Z}[x]$ such that $b_0 + a_0 s \equiv m_0 + e_0 + Qr \bmod \phi(x)$. Let $\varepsilon_1 = \frac{q a_0}{Q} - a_0'$ and $\varepsilon_2 = \frac{q b_0}{Q} - b_0'$. Then,

\[
\begin{aligned}
b_0' = \frac{q b_0}{Q} - \varepsilon_2 &\equiv -\frac{q a_0 s}{Q} + \frac{q}{Q}m_0 + \frac{q}{Q}e_0 - \varepsilon_2 + q r \bmod \phi(x)\\
&\equiv -a_0' s - \varepsilon_1 s + \frac{q}{Q}m_0 + \frac{q}{Q}e_0 - \varepsilon_2 + q r \bmod \phi(x).
\end{aligned}
\]

Let $e_0' = \frac{q}{Q}e_0 - \varepsilon_2 - \varepsilon_1 s$. Then, $b_0' + a_0' s \equiv \frac{q}{Q}m_0 + e_0' \bmod (\phi(x), q)$. Therefore,

\[
\|e_0'\| \le \frac{q}{Q}E + \frac{1 + \delta_R\|s\|}{2}.
\]

By assumption $\frac{Q}{q} > \frac{2E}{\delta_R\|s\| - 1}$, so $\|e_0'\| < \delta_R\|s\|$.□
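The rescaling step just analyzed can be mimicked at degree 0. The sketch below uses hypothetical moduli $Q, q$ and checks that after scaling both components by $q/Q$ and rounding, the decryption value tracks $\frac{q}{Q}m_0$ with drift bounded roughly by $\frac{q}{Q}E + \frac{1 + \delta_R\|s\|}{2}$ (which is about $1$ here, since $\delta_R = 1$):

```python
import random

# Toy degree-0 CKKS modulus reduction (rescaling) from Q down to q;
# hypothetical parameters.
Q, q = 2**40, 2**20
random.seed(3)
s = random.choice([-1, 1])

m = 123456                          # "encoded" message, small relative to Q
a = random.randrange(Q)
e = random.randint(-5, 5)
b = (m + e - a * s) % Q             # b + a*s ≡ m + e (mod Q)

def rescale(x):
    return (q * x + Q // 2) // Q    # round(q*x/Q)

a2, b2 = rescale(a) % q, rescale(b) % q
v = (b2 + a2 * s) % q
if v > q // 2:
    v -= q
# v should sit within ~1 of (q/Q)*m
assert abs(v - round(q * m / Q)) <= 2
```

The two rounding errors $\varepsilon_1, \varepsilon_2$ contribute at most $\frac12$ each, matching the proof's additive term.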

Proof of Lemma 3.10

We consider step 2 of Algorithm 14, in which we compute $\mathrm{ct} \leftarrow$ BFV.Encrypt$(m, 1, \mathrm{pk})$. By an argument identical to the proof of Lemma 3.1, steps 1–3 of Algorithm 4 produce an ordered pair $(a_0, b_0) \in R_{n, p_0 q}^2$ satisfying

\[
b_0 + a_0 s \equiv e_0 \bmod (\phi(x), p_0 q)
\]

for $e_0 \in R_n$ with $\|e_0\| \le 2\delta_R^2 + \delta_R$. By assumption $p_0 > \frac{4\delta_R^2 + 2\delta_R}{\delta_R - 1}$, so $(a_0', b_0^*) \leftarrow$ CKKS.Modreduce$(p_0 q, q, \mathrm{ct}_0)$ satisfies

\[
b_0^* \equiv -a_0' s + e_0' \bmod (\phi(x), q)
\]

with $\|e_0'\| \le \rho$ by Lemma 3.9. Since $b_0' = [b_0^* + m]_q$, we have

\[
b_0' + a_0' s \equiv m + e_0' \bmod (\phi(x), q).
\]

Let $(a, b) = (a_0', b_0')$. Then, $\mathrm{ct} = (a, b)$ is the output of Algorithm 14 and is a CKKS ciphertext with noise bounded by $\rho$.□

Proof of Lemma 3.11

By assumption, each $\mathrm{ct}_i = (a_i, b_i) \in R_{n,q}^2$ satisfies

\[
b_i + a_i s \equiv m_i + e_i \bmod (\phi(x), q)
\]

for some noise term $e_i$ with $\|e_i\| \le E$. Then,

\[
\mathrm{ct}_0' = \left[\sum_{i=0}^{k-1}\alpha_i\,\mathrm{ct}_i\right]_q = \left(\left[\sum_{i=0}^{k-1}\alpha_i a_i\right]_q, \left[\sum_{i=0}^{k-1}\alpha_i b_i\right]_q\right).
\]

Let $e = \sum_{i=0}^{k-1}\alpha_i e_i$. Then,

\[
\sum_{i=0}^{k-1}\alpha_i b_i + \left(\sum_{i=0}^{k-1}\alpha_i a_i\right)s \equiv \sum_{i=0}^{k-1}\alpha_i m_i + \sum_{i=0}^{k-1}\alpha_i e_i \equiv \sum_{i=0}^{k-1}\alpha_i m_i + e \bmod (\phi(x), q).
\]

So, $\mathrm{ct}_0'$ is a CKKS ciphertext with noise term $e$ and

\[
\|e\| \le \sum_{i=0}^{k-1}|\alpha_i|\,\|e_i\| \le ME. \qquad\Box
\]

Proof of Lemma 3.12

By assumption, we have for $i = 0, 1$,

\[
b_i + a_i s \equiv m_i + e_i \bmod (\phi(x), q)
\]

with $\|e_i\| \le E$. Let $e = m_0 e_1 + m_1 e_0 + e_0 e_1$. Then,

\[
\begin{aligned}
c_0 + c_1 s + c_2 s^2 &= (b_0 + a_0 s)(b_1 + a_1 s)\\
&\equiv (m_0 + e_0)(m_1 + e_1) \bmod (\phi(x), q)\\
&\equiv m_0 m_1 + m_0 e_1 + m_1 e_0 + e_0 e_1 \bmod (\phi(x), q)\\
&\equiv m_0 m_1 + e \bmod (\phi(x), q).
\end{aligned}
\]

The bound on $e$ then follows immediately.□
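The CKKS multiplication invariant is an exact algebraic identity, so it can be verified directly. The following toy, degree-0 sketch (hypothetical modulus) confirms $c_0 + c_1 s + c_2 s^2 \equiv m_0 m_1 + (m_0 e_1 + m_1 e_0 + e_0 e_1) \bmod q$:

```python
import random

# Toy degree-0 check of the CKKS multiplication noise expression;
# hypothetical modulus q.
q = 2**40
random.seed(4)
s = random.choice([-1, 1])

def encrypt(m):
    a = random.randrange(q)
    e = random.randint(-3, 3)
    b = (m + e - a * s) % q
    return a, b, e

m0, m1 = 1000, 2000                 # scaled encodings, small relative to q
a0, b0, e0 = encrypt(m0)
a1, b1, e1 = encrypt(m1)

c0 = (b0 * b1) % q
c1 = (b1 * a0 + b0 * a1) % q
c2 = (a0 * a1) % q

v = (c0 + c1 * s + c2 * s * s) % q
expected = (m0 * m1 + m0 * e1 + m1 * e0 + e0 * e1) % q
assert v == expected
```

Unlike the BGV/BFV cases there is no rounding here, so the equality is exact, not merely within a bound.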

Proof of Lemma 3.13

It is clear from the proof of Lemma 3.5 that $d_0 + d_1 s \equiv c_2 s^2 + e' \bmod (\phi(x), q)$, where $e' = \frac{c_2 e_1}{p_1} + \varepsilon_0 + \varepsilon_1 s$ and $\varepsilon_0, \varepsilon_1 \in \mathbb{R}[x]/(\phi(x))$ with $\|\varepsilon_0\| \le \frac{1}{2}$ and $\|\varepsilon_1\| \le \frac{1}{2}$. By assumption, $c_0 + c_1 s + c_2 s^2 \equiv m_0 m_1 + e \bmod (\phi(x), q)$ with $\|e\| \le Et\delta_R + E^2\delta_R$, so

\[
\begin{aligned}
c_1' + c_0' s &\equiv c_0 + d_0 + (c_1 + d_1)s \bmod (\phi(x), q)\\
&\equiv c_0 + c_1 s + c_2 s^2 + e' \bmod (\phi(x), q)\\
&\equiv m_0 m_1 + e + e' \bmod (\phi(x), q).
\end{aligned}
\]

Let $e^* = e + e'$. Then, $c_1' + c_0' s \equiv m_0 m_1 + e^* \bmod (\phi(x), q)$. The bound on $e^*$ follows immediately.□

Proof of Lemma 4.1

Suppose we have a collection of BFV ciphertexts, each with noise bounded by $\rho = \delta_R\|s\|$. By Lemma 3.3, computing

\[
\mathrm{ct}_j' \leftarrow \mathrm{Linearcombo}(\mathrm{ct}_{j,1}, \ldots, \mathrm{ct}_{j,k_1}, 1, \ldots, 1)
\]

results in BFV ciphertexts $\mathrm{ct}_j'$ for $j = 1, \ldots, 2k_2$, each with noise bounded by $k_1\delta_R\|s\| + k_1$. Since $\delta_R \ge 16$, we have

\[
k_1\delta_R\|s\| + k_1 = \left(k_1 + \frac{k_1}{\delta_R\|s\|}\right)\delta_R\|s\| \le \frac{17}{16}k_1\delta_R\|s\|.
\]

By Lemma 3.4, each Multiply$(\mathrm{ct}_{2j-1}', \mathrm{ct}_{2j}')$ in step 2 of Algorithm 16 for $j = 1, \ldots, k_2$ results in a polynomial triple with noise bounded by

\[
3.5\left(\frac{17}{16}k_1\delta_R\|s\|\right)t\delta_R^2\|s\|^2 = \frac{119}{32}k_1 t\delta_R^3\|s\|^3.
\]

Summing all k 2 of these polynomial triples in step 2 of Algorithm 16 results in a polynomial triple with noise bounded by

\[
\frac{119}{32}k_1 k_2 t\delta_R^3\|s\|^3 + k_2
\]

by an equivalent argument to Lemma 3.3 with polynomial triples as the input. Notice,

\[
\frac{119}{32}k_1 k_2 t\delta_R^3\|s\|^3 + k_2 = \frac{119}{32}k_1 k_2 t\delta_R^3\|s\|^3\left(1 + \frac{32}{119\, k_1 t\delta_R^3\|s\|^3}\right) \le \frac{119}{32}k_1 k_2 t\delta_R^3\|s\|^3\left(1 + \frac{32}{119 \cdot 2 \cdot 16^3}\right) \le \frac{30}{8}k_1 k_2 t\delta_R^3\|s\|^3.
\]

By the proof of Lemma 3.5, BFV.Relinearize introduces additional noise of at most $\frac{1}{8}\delta_R^2\|s\|$. So, after performing relinearization in step 3 of Algorithm 16, we have a BFV ciphertext with noise bounded by

\[
\frac{30}{8}k_1 k_2 t\delta_R^3\|s\|^3 + \frac{1}{8}\delta_R^2\|s\| = \frac{30}{8}k_1 k_2 t\delta_R^3\|s\|^3\left(1 + \frac{1}{30\, k_1 k_2 t\delta_R\|s\|^2}\right) \le \frac{30}{8}k_1 k_2 t\delta_R^3\|s\|^3\left(1 + \frac{1}{30 \cdot 2 \cdot 16}\right) \le \frac{31}{8}k_1 k_2 t\delta_R^3\|s\|^3.
\]

So, a worst-case noise bound for a depth-1 multiplication is given by $\frac{31}{8}k_1 k_2 t\delta_R^3\|s\|^3$. Since $\delta_R - 2 \ge \frac{7}{8}\delta_R$ and $\delta_R\|s\| - 2 \ge \frac{7}{8}\delta_R\|s\|$, we have

\[
\frac{2\left(\frac{31}{8}k_1 k_2 t\delta_R^3\|s\|^3\right)}{\delta_R\|s\| - 2} \le \frac{\frac{62}{8}k_1 k_2 t\delta_R^3\|s\|^3}{\frac{7}{8}\delta_R\|s\|} < 9 k_1 k_2 t\delta_R^2\|s\|^2.
\]

As $\delta_R = n$ and $\|s\| = 1$, we have that $9 k_1 k_2 t\delta_R^2\|s\|^2 = 9 k_1 k_2 t n^2 < q_i$. By Lemma 2.2, BFV modulus reduction from $Q_i$ to $Q_{i-1}$ gives a new ciphertext with noise bounded by $\rho$. Thus, the lemma is proved.□
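The constant-chasing in the chain above is easy to get wrong, so it is worth checking exactly with rational arithmetic at the extreme point the proof uses ($\delta_R = 16$, $t = 2$, $\|s\| = 1$, $k_1 = k_2 = 1$):

```python
from fractions import Fraction as F

# Exact check of the Lemma 4.1 bound chain at the worst-case parameter point.
d, t, s, k1, k2 = 16, 2, 1, 1, 1

step1 = F(17, 16) * k1 * d * s                       # after Linearcombo
assert k1 * d * s + k1 <= step1

step2 = F(119, 32) * k1 * t * d**3 * s**3            # after Multiply: 3.5*E*t*rho^2
assert F(7, 2) * step1 * t * d**2 * s**2 == step2

step3 = step2 * k2 + k2                              # after summing k2 triples
assert step3 <= F(30, 8) * k1 * k2 * t * d**3 * s**3

step4 = F(30, 8) * k1 * k2 * t * d**3 * s**3 + F(1, 8) * d**2 * s
assert step4 <= F(31, 8) * k1 * k2 * t * d**3 * s**3  # after relinearization

# Modulus reduction: doubling then dividing by (delta*s - 2) stays below 9*...
assert 2 * F(31, 8) * k1 * k2 * t * d**3 * s**3 / (d * s - 2) \
    < 9 * k1 * k2 * t * d**2 * s**2
```

Every inequality holds with slack, confirming that $9 k_1 k_2 t n^2 < q_i$ suffices for the final modulus reduction back to noise $\rho$.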

Proof of Lemma 4.2

Suppose we have a collection of BGV ciphertexts, each with noise bounded by $\rho = \delta_R\|s\|$. By a similar argument to Lemma 3.3, computing

\[
\mathrm{ct}_j' \leftarrow \mathrm{Linearcombo}(\mathrm{ct}_{j,1}, \ldots, \mathrm{ct}_{j,k_1}, 1, \ldots, 1)
\]

results in BGV ciphertexts $\mathrm{ct}_j'$ for $j = 1, \ldots, 2k_2$, each with noise bounded by $k_1\delta_R\|s\| + k_1$. Since $\delta_R \ge 16$, we have

\[
k_1\delta_R\|s\| + k_1 = \left(k_1 + \frac{k_1}{\delta_R\|s\|}\right)\delta_R\|s\| \le \frac{17}{16}k_1\delta_R\|s\|.
\]

By Lemma 3.7, each Multiply$(\mathrm{ct}_{2j-1}', \mathrm{ct}_{2j}')$ in step 2 of Algorithm 16 for $j = 1, \ldots, k_2$ results in a polynomial triple with noise bounded by

\[
2t\delta_R\left(\left(\frac{17}{16}k_1\delta_R\|s\|\right)^2 + 1\right) = 2t\delta_R\left(\frac{289}{256}k_1^2\delta_R^2\|s\|^2 + 1\right).
\]

Then,

\[
2t\delta_R\left(\frac{289}{256}k_1^2\delta_R^2\|s\|^2 + 1\right) = \frac{289}{128}t k_1^2\delta_R^3\|s\|^2\left(1 + \frac{256}{289\, k_1^2\delta_R^2\|s\|^2}\right) \le \frac{289}{128}t k_1^2\delta_R^3\|s\|^2\left(1 + \frac{1}{289}\right) = \frac{290}{128}t k_1^2\delta_R^3\|s\|^2.
\]

Summing all k 2 of these polynomial triples in step 2 of Algorithm 16 results in a polynomial triple with noise bounded by

\[
\frac{290}{128}k_1^2 k_2 t\delta_R^3\|s\|^3 + k_2.
\]

Notice,

\[
\frac{290}{128}k_1^2 k_2 t\delta_R^3\|s\|^3 + k_2 = \frac{290}{128}k_1^2 k_2 t\delta_R^3\|s\|^3\left(1 + \frac{128}{290\, k_1^2 t\delta_R^3\|s\|^3}\right) = \frac{290}{128}k_1^2 k_2 t\delta_R^3\|s\|^3\left(1 + \frac{128}{290 \cdot 2 \cdot 16^3}\right) \le \frac{290}{128}k_1^2 k_2 t\delta_R^3\|s\|^3\left(1 + \frac{1}{18560}\right) \le \frac{13}{8}k_1^2 k_2 t\delta_R^3\|s\|^3.
\]

By the proof of Lemma 3.8, BGV.Relinearize introduces additional noise of at most $\frac{1}{8}\delta_R^2\|s\|^2$. So, after performing relinearization in step 3 of Algorithm 16, we have a BGV ciphertext with noise bounded by

\[
\frac{13}{8}k_1^2 k_2 t\delta_R^3\|s\|^3 + \frac{1}{8}\delta_R^2\|s\|^2 = \frac{13}{8}k_1^2 k_2 t\delta_R^3\|s\|^3\left(1 + \frac{1}{13\, k_1^2 k_2 t\delta_R\|s\|}\right) \le \frac{13}{8}k_1^2 k_2 t\delta_R^3\|s\|^3\left(1 + \frac{1}{416}\right) \le \frac{14}{8}k_1^2 k_2 t\delta_R^3\|s\|^3.
\]

So, a worst-case noise bound for a depth-1 multiplication is given by $\frac{14}{8}k_1^2 k_2 t\delta_R^3\|s\|^3$. Since $\delta_R - 2 \ge \frac{7}{8}\delta_R$ and $\delta_R\|s\| - 2 \ge \frac{7}{8}\delta_R\|s\|$, we have

\[
\frac{2\left(\frac{14}{8}k_1^2 k_2 t\delta_R^3\|s\|^3\right)}{\delta_R\|s\| - 2} \le \frac{\frac{28}{8}k_1^2 k_2 t\delta_R^3\|s\|^3}{\frac{7}{8}\delta_R\|s\|} = 4 k_1^2 k_2 t\delta_R^2\|s\|^2.
\]

As $\delta_R = n$ and $\|s\| = 1$, we have that $4 k_1^2 k_2 t\delta_R^2\|s\|^2 = 4 k_1^2 k_2 t n^2 < q_i$. By Lemma 2.3, BGV modulus reduction from $Q_i$ to $Q_{i-1}$ gives a new ciphertext with noise bounded by $\rho$. Thus, the lemma is proved.□

Proof of Lemma 4.3

Suppose we have a collection of CKKS ciphertexts, each with noise bounded by $E$. Recall that $\delta_R\|s\| = n$. By Lemma 3.11, computing

\[
\mathrm{ct}_j' \leftarrow \mathrm{Linearcombo}(\mathrm{ct}_{j,1}, \ldots, \mathrm{ct}_{j,k_1}, 1, \ldots, 1)
\]

results in CKKS ciphertexts $\mathrm{ct}_j'$ for $j = 1, \ldots, 2k_2$, each with noise bounded by $k_1 E$. Let $t = 2\Delta\|Z\| + 1$, and observe that for each encoding $m_j$ of message $z_j$,

\[
\|m_j\| \le \Delta\|z_j\| + \frac{1}{2} \le \Delta\|Z\| + \frac{1}{2} = \frac{t}{2}.
\]

By Lemma 3.12, each Multiply$(\mathrm{ct}_{2j-1}', \mathrm{ct}_{2j}')$ in Step 2 of Algorithm 16 for $j = 1, \ldots, k_2$ results in a polynomial triple with noise bounded by

\[
k_1 E t n + k_1^2 E^2 n = k_1 E(2\Delta\|Z\| + 1)n + k_1^2 E^2 n.
\]

Summing all k 2 of these polynomial triples in Step 2 of Algorithm 16 results in a polynomial triple with noise bounded by

\[
k_1 k_2 E(2\Delta\|Z\| + 1)n + k_1^2 k_2 E^2 n
\]

by an equivalent argument to Lemma 3.11 with polynomial triples as the input. By the proof of Lemma 3.13, CKKS.Relinearize introduces additional noise of at most $\frac{1}{8}n^2$. So, after performing relinearization in Step 3 of Algorithm 16, we have a CKKS ciphertext with noise bounded by

\[
k_1 k_2 E(2\Delta\|Z\| + 1)n + k_1^2 k_2 E^2 n + \frac{1}{8}n^2.
\]

So, a worst-case noise bound for a depth-1 multiplication is given by $k_1 k_2 E(2\Delta\|Z\| + 1)n + k_1^2 k_2 E^2 n + \frac{1}{8}n^2$. Performing a rescaling operation then gives noise bounded by

\[
\frac{2 k_1 k_2 E\Delta\|Z\| n}{\Delta} + \frac{k_1 k_2 E n}{\Delta} + \frac{k_1^2 k_2 E^2 n}{\Delta} + \frac{n^2}{8\Delta} \le 2 k_1 k_2 n E\|Z\| + \frac{k_1 k_2 E n}{\Delta} + \frac{k_1^2 k_2 E^2 n}{\Delta} + \frac{1}{8}. \qquad\Box
\]

Proof of Lemma 4.7

By assumption, the input of Algorithm 23 is the RNS representation of some $(c_0, c_1, c_2) \in R_{n,Q_i}^3$ satisfying

\[
c_0 + c_1 s + c_2 s^2 \equiv D_{Q_i}[m_0 m_1]_{\phi(x),t} + e \bmod (\phi(x), Q_i)
\]

for $\|e\| \le E$. From Step 1, $(\tilde c_2^{(0)}, \ldots, \tilde c_2^{(k-1)}, c_2^{(0)}, \ldots, c_2^{(i)})$ is the RNS representation of $\tilde c_2 \in R_{n, PQ_i}$ in basis $D$ satisfying $\tilde c_2 = c_2 + Q_i e'$ for some $e' \in R_n$, with $\|\tilde c_2\| \le \frac{Q_i(i+1)}{2}$ by Lemma 4.4. After Step 3, note that $(\hat a^{(j)}, \hat b^{(j)})_{0 \le j \le i+k}$ is the RNS representation of some $(\hat a, \hat b) \in R_{n, PQ_i}^2$ satisfying

\[
\begin{aligned}
\hat b + \hat a s &\equiv \tilde c_2 \tilde k_1 + \tilde c_2 \tilde k_0 s \bmod (\phi(x), PQ_i)\\
&\equiv \tilde c_2(-\tilde k_0 s + P s^2 + \tilde e) + \tilde c_2 \tilde k_0 s \bmod (\phi(x), PQ_i)\\
&\equiv (c_2 + Q_i e')(-\tilde k_0 s + P s^2 + \tilde e) + (c_2 + Q_i e')\tilde k_0 s \bmod (\phi(x), PQ_i)\\
&\equiv (c_2 + Q_i e')(P s^2 + \tilde e) \bmod (\phi(x), PQ_i)\\
&\equiv c_2 P s^2 + c_2\tilde e + P Q_i e' s^2 + Q_i e'\tilde e \bmod (\phi(x), PQ_i)\\
&\equiv c_2 P s^2 + c_2\tilde e + Q_i e'\tilde e \bmod (\phi(x), PQ_i)\\
&\equiv c_2 P s^2 + \hat e \bmod (\phi(x), PQ_i)
\end{aligned}
\]

for $\hat e = c_2\tilde e + Q_i e'\tilde e = \tilde c_2\tilde e$. Note that as $\tilde e \leftarrow \chi_\rho$, we have $\|\hat e\| = \|\tilde c_2\tilde e\| \le \frac{Q_i(i+1)\rho^2}{2}$. Furthermore, there exists $\omega \in R_n$ such that

\[
\hat b + \hat a s \equiv c_2 P s^2 + \hat e + \omega P Q_i \bmod \phi(x).
\]

By Lemma 4.5, Step 4 returns the RNS representation of some $\hat c_0 \in R_{n,Q_i}$ and $\hat c_1 \in R_{n,Q_i}$ satisfying

\[
\hat c_0 = \frac{\hat b}{P} + \hat e_0, \qquad \hat c_1 = \frac{\hat a}{P} + \hat e_1
\]

with $\|\hat e_0\| \le \frac{k}{2}$ and $\|\hat e_1\| \le \frac{k}{2}$. Finally, Step 6 returns the RNS representation of $(a, b) \in R_{n,Q_i}^2$, which satisfies

\[
\begin{aligned}
b + a s &\equiv c_0 + \hat c_0 + (c_1 + \hat c_1)s \bmod (\phi(x), Q_i)\\
&\equiv c_0 + c_1 s + P^{-1}(\hat b + \hat a s) + \hat e_0 + \hat e_1 s \bmod (\phi(x), Q_i)\\
&\equiv c_0 + c_1 s + P^{-1}(c_2 P s^2 + \hat e + \omega P Q_i) + \hat e_0 + \hat e_1 s \bmod (\phi(x), Q_i)\\
&\equiv c_0 + c_1 s + c_2 s^2 + P^{-1}\hat e + \omega Q_i + \hat e_0 + \hat e_1 s \bmod (\phi(x), Q_i)\\
&\equiv D_{Q_i}[m_0 m_1]_{\phi(x),t} + e + P^{-1}\hat e + \hat e_0 + \hat e_1 s \bmod (\phi(x), Q_i)\\
&\equiv D_{Q_i}[m_0 m_1]_{\phi(x),t} + e^* \bmod (\phi(x), Q_i)
\end{aligned}
\]

for $e^* = e + P^{-1}\hat e + \hat e_0 + \hat e_1 s$. We now turn to the noise term $e^*$. If $P \ge 6Q_i$, $\delta_R \ge 16$, and $k > i$, then

\[
\|e^*\| \le \|e\| + P^{-1}\|\hat e\| + \|\hat e_0\| + \|\hat e_1 s\| \le \|e\| + \frac{Q_i(i+1)}{2P}\delta_R^2\|s\| + \frac{k}{2} + \frac{k\delta_R\|s\|}{2} \le \|e\| + \delta_R^2 k\left(\frac{1}{12} + \frac{1}{2\delta_R^2} + \frac{1}{2\delta_R}\right) \le E + \frac{1}{8}\delta_R^2 k. \qquad\Box
\]

Proof of Lemma 4.8

By assumption, the input of Algorithm 23 is the RNS representation of some $(c_0, c_1, c_2) \in R_{n,Q_i}^3$ satisfying

\[
c_0 + c_1 s + c_2 s^2 \equiv [m_0 m_1]_{\phi(x),t} + t e \bmod (\phi(x), Q_i)
\]

for $\|e\| \le E$. From Step 1, $(\tilde c_2^{(0)}, \ldots, \tilde c_2^{(k-1)}, c_2^{(0)}, \ldots, c_2^{(i)})$ is the RNS representation of $\tilde c_2 \in R_{n, PQ_i}$ in basis $D$ satisfying $\tilde c_2 = c_2 + Q_i e'$ for some $e' \in R_n$, with $\|\tilde c_2\| \le \frac{Q_i(i+1)}{2}$ by Lemma 4.4. After Step 3, note that $(\hat a^{(j)}, \hat b^{(j)})_{0 \le j \le i+k}$ is the RNS representation of some $(\hat a, \hat b) \in R_{n, PQ_i}^2$ satisfying

\[
\begin{aligned}
\hat b + \hat a s &\equiv \tilde c_2 \tilde k_1 + \tilde c_2 \tilde k_0 s \bmod (\phi(x), PQ_i)\\
&\equiv \tilde c_2(-\tilde k_0 s + P s^2 + t\tilde e) + \tilde c_2 \tilde k_0 s \bmod (\phi(x), PQ_i)\\
&\equiv (c_2 + Q_i e')(-\tilde k_0 s + P s^2 + t\tilde e) + (c_2 + Q_i e')\tilde k_0 s \bmod (\phi(x), PQ_i)\\
&\equiv (c_2 + Q_i e')(P s^2 + t\tilde e) \bmod (\phi(x), PQ_i)\\
&\equiv c_2 P s^2 + t c_2\tilde e + P Q_i e' s^2 + t Q_i e'\tilde e \bmod (\phi(x), PQ_i)\\
&\equiv c_2 P s^2 + t(c_2\tilde e + Q_i e'\tilde e) \bmod (\phi(x), PQ_i)\\
&\equiv c_2 P s^2 + t\hat e \bmod (\phi(x), PQ_i)
\end{aligned}
\]

for $\hat e = c_2\tilde e + Q_i e'\tilde e = \tilde c_2\tilde e$. Note that as $\tilde e \leftarrow \chi_\rho$, we have $\|\hat e\| = \|\tilde c_2\tilde e\| \le \frac{Q_i(i+1)\rho^2}{2}$. Furthermore, there exists $\omega \in R_n$ such that

\[
\hat b + \hat a s \equiv c_2 P s^2 + t\hat e + \omega P Q_i \bmod \phi(x).
\]

By Lemma 4.6, Step 4 returns the RNS representation of some $\hat c_0 \in R_{n,Q_i}$ and $\hat c_1 \in R_{n,Q_i}$ satisfying

\[
\hat c_0 = \frac{\hat b}{P} + t\hat e_0, \qquad \hat c_1 = \frac{\hat a}{P} + t\hat e_1
\]

with $\|\hat e_0\| \le \frac{k}{2}$ and $\|\hat e_1\| \le \frac{k}{2}$. Finally, Step 6 returns the RNS representation of $(a, b) \in R_{n,Q_i}^2$, which satisfies

\[
\begin{aligned}
b + a s &\equiv c_0 + \hat c_0 + (c_1 + \hat c_1)s \bmod (\phi(x), Q_i)\\
&\equiv c_0 + c_1 s + P^{-1}(\hat b + \hat a s) + t\hat e_0 + t\hat e_1 s \bmod (\phi(x), Q_i)\\
&\equiv c_0 + c_1 s + P^{-1}(c_2 P s^2 + t\hat e + \omega P Q_i) + t\hat e_0 + t\hat e_1 s \bmod (\phi(x), Q_i)\\
&\equiv c_0 + c_1 s + c_2 s^2 + P^{-1}t\hat e + \omega Q_i + t\hat e_0 + t\hat e_1 s \bmod (\phi(x), Q_i)\\
&\equiv [m_0 m_1]_{\phi(x),t} + t e + P^{-1}t\hat e + t\hat e_0 + t\hat e_1 s \bmod (\phi(x), Q_i)\\
&\equiv [m_0 m_1]_{\phi(x),t} + t e^* \bmod (\phi(x), Q_i)
\end{aligned}
\]

for $e^* = e + P^{-1}\hat e + \hat e_0 + \hat e_1 s$. The bound on $e^*$ follows identically as in the proof of Lemma 4.7.□

Proof of Lemma 4.9

By assumption, the input of Algorithm 23 is the RNS representation of some $(c_0, c_1, c_2) \in R_{n,Q_i}^3$ satisfying

\[
c_0 + c_1 s + c_2 s^2 \equiv m_0 m_1 + e \bmod (\phi(x), Q_i)
\]

for $\|e\| \le E$. By an identical argument to the proof of Lemma 4.7, the output of Algorithm 23 is the RNS representation of some $(a, b) \in R_{n,Q_i}^2$ satisfying

\[
b + a s \equiv m_0 m_1 + e^* \bmod (\phi(x), Q_i)
\]

with $\|e^*\| \le E + \frac{1}{8}\delta_R^2 k$.□
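The RNS machinery underlying Lemmas 4.7–4.9 rests on the Chinese remainder theorem: each big coefficient is stored as residues modulo pairwise-coprime moduli, and ring operations act componentwise. A minimal sketch with hypothetical moduli (not the paper's parameter sets):

```python
from math import prod

# Sketch of the RNS representation used throughout Section 4: an integer
# (one polynomial coefficient) is stored as residues modulo pairwise-coprime
# moduli q_j, and multiplication acts componentwise. Hypothetical moduli.
qs = [2**20 - 3, 2**20 - 5, 2**20 - 19]   # pairwise coprime
Q = prod(qs)

def to_rns(x):
    return [x % qj for qj in qs]

def from_rns(res):
    # CRT: x = sum res_j * (Q/q_j) * ((Q/q_j)^{-1} mod q_j) mod Q
    x = 0
    for rj, qj in zip(res, qs):
        Qj = Q // qj
        x += rj * Qj * pow(Qj, -1, qj)
    return x % Q

a, b = 123456789012345, 987654321098765
ra, rb = to_rns(a), to_rns(b)
prod_rns = [(x * y) % qj for x, y, qj in zip(ra, rb, qs)]
assert from_rns(prod_rns) == (a * b) % Q
```

The componentwise product reconstructs to the full product modulo $Q$, which is why the algorithms above never need multi-precision arithmetic at a single large modulus.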

References

[1] Gentry C. A fully homomorphic encryption scheme. PhD thesis. Stanford, CA: Stanford University; 2009. https://crypto.stanford.edu/craig/craig-thesis.pdf.

[2] Regev O. On lattices, learning with errors, random linear codes, and cryptography. J ACM. 2009;56(6):1–40. 10.1145/1568318.1568324.

[3] Lyubashevsky V, Peikert C, Regev O. On ideal lattices and learning with errors over rings. In: Gilbert H, editor. Advances in Cryptology – EUROCRYPT 2010. Berlin, Heidelberg: Springer Berlin Heidelberg; 2010. p. 1–23. 10.1007/978-3-642-13190-5_1.

[4] Brakerski Z, Gentry C, Vaikuntanathan V. Fully homomorphic encryption without bootstrapping. 2011. Cryptology ePrint Archive, Report 2011/277. https://ia.cr/2011/277.

[5] Brakerski Z, Gentry C, Vaikuntanathan V. (Leveled) fully homomorphic encryption without bootstrapping. ACM Trans Comput Theory. 2014;6(3):1–36. 10.1145/2633600.

[6] Fan J, Vercauteren F. Somewhat practical fully homomorphic encryption. 2012. Cryptology ePrint Archive, Report 2012/144. https://ia.cr/2012/144.

[7] Cheon JH, Kim A, Kim M, Song Y. Homomorphic encryption for arithmetic of approximate numbers. In: Takagi T, Peyrin T, editors. Advances in Cryptology – ASIACRYPT 2017. Cham: Springer International Publishing; 2017. p. 409–37. 10.1007/978-3-319-70694-8_15.

[8] Albrecht M, Chase M, Chen H, Ding J, Goldwasser S, Gorbunov S, et al. Homomorphic encryption standard. 2019. Cryptology ePrint Archive, Paper 2019/939. https://eprint.iacr.org/2019/939.

[9] Bossuat JP, Cammarota R, Chillotti I, Curtis BR, Dai W, Gong H, et al. Security guidelines for implementing homomorphic encryption. IACR Commun Cryptol. 2025;1(4). 10.62056/anxra69p1.

[10] Chillotti I, Gama N, Georgieva M, Izabachène M. Faster fully homomorphic encryption: Bootstrapping in less than 0.1 seconds. In: Cheon JH, Takagi T, editors. Advances in Cryptology – ASIACRYPT 2016. Berlin, Heidelberg: Springer Berlin Heidelberg; 2016. p. 3–33. 10.1007/978-3-662-53887-6_1.

[11] Chillotti I, Gama N, Georgieva M, Izabachène M. Faster packed homomorphic operations and efficient circuit bootstrapping for TFHE. In: Takagi T, Peyrin T, editors. Advances in Cryptology – ASIACRYPT 2017. Cham: Springer International Publishing; 2017. p. 377–408. 10.1007/978-3-319-70694-8_14.

[12] Gao S. Efficient fully homomorphic encryption scheme. 2018. Cryptology ePrint Archive, Paper 2018/637. https://eprint.iacr.org/2018/637.

[13] Case BM, Gao S, Hu G, Xu Q. Fully homomorphic encryption with k-bit arithmetic operations. 2019. Cryptology ePrint Archive, Paper 2019/521. https://eprint.iacr.org/2019/521.

[14] Costache A, Smart NP. Which ring based somewhat homomorphic encryption scheme is best? In: Proceedings of the RSA Conference on Topics in Cryptology – CT-RSA 2016 – Volume 9610. Berlin, Heidelberg: Springer-Verlag; 2016. p. 325–40. 10.1007/978-3-319-29485-8_19.

[15] Bos JW, Lauter K, Loftus J, Naehrig M. Improved security for a ring-based fully homomorphic encryption scheme. In: Stam M, editor. Cryptography and Coding. Berlin, Heidelberg: Springer Berlin Heidelberg; 2013. p. 45–64. 10.1007/978-3-642-45239-0_4.

[16] Kim A, Polyakov Y, Zucca V. Revisiting homomorphic encryption schemes for finite fields. In: Tibouchi M, Wang H, editors. Advances in Cryptology – ASIACRYPT 2021. Cham: Springer International Publishing; 2021. p. 608–39. 10.1007/978-3-030-92078-4_21.

[17] Costache A, Curtis BR, Hales E, Murphy S, Ogilvie T, Player R. On the precision loss in approximate homomorphic encryption. In: Selected Areas in Cryptography – SAC 2023: 30th International Conference, Fredericton, Canada, August 14–18, 2023, Revised Selected Papers. Berlin, Heidelberg: Springer-Verlag; 2024. p. 325–45. 10.1007/978-3-031-53368-6_16.

[18] Costache A, Laine K, Player R. Evaluating the effectiveness of heuristic worst-case noise analysis in FHE. In: Computer Security – ESORICS 2020: 25th European Symposium on Research in Computer Security, Guildford, UK, September 14–18, 2020, Proceedings, Part II. Berlin, Heidelberg: Springer-Verlag; 2020. p. 546–65. 10.1007/978-3-030-59013-0_27.

[19] Gentry C, Halevi S, Smart NP. Homomorphic evaluation of the AES circuit. In: Safavi-Naini R, Canetti R, editors. Advances in Cryptology – CRYPTO 2012. Berlin, Heidelberg: Springer Berlin Heidelberg; 2012. p. 850–67. 10.1007/978-3-642-32009-5_49.

[20] Costache A, Nürnberger L, Player R. Optimisations and tradeoffs for HElib. In: Rosulek M, editor. Topics in Cryptology – CT-RSA 2023. Cham: Springer International Publishing; 2023. p. 29–53. 10.1007/978-3-031-30872-7_2.

[21] Cheon JH, Han K, Kim A, Kim M, Song Y. A full RNS variant of approximate homomorphic encryption. In: Selected Areas in Cryptography – SAC 2018. 2018;11349:347–68. 10.1007/978-3-030-10970-7_16.

[22] Halevi S, Polyakov Y, Shoup V. An improved RNS variant of the BFV homomorphic encryption scheme. In: Matsui M, editor. Topics in Cryptology – CT-RSA 2019. Cham: Springer International Publishing; 2019. p. 83–105. 10.1007/978-3-030-12612-4_5.

[23] Bajard JC, Eynard J, Hasan MA, Zucca V. A full RNS variant of FV like somewhat homomorphic encryption schemes. In: Avanzi R, Heys H, editors. Selected Areas in Cryptography – SAC 2016. Cham: Springer International Publishing; 2017. p. 423–42. 10.1007/978-3-319-69453-5_23.

[24] Albrecht MR, Player R, Scott S. On the concrete hardness of learning with errors. J Math Cryptol. 2015;9(3):169–203. 10.1515/jmc-2015-0016.

[25] Lyubashevsky V, Peikert C, Regev O. A toolkit for ring-LWE cryptography. In: Johansson T, Nguyen PQ, editors. Advances in Cryptology – EUROCRYPT 2013. Berlin, Heidelberg: Springer Berlin Heidelberg; 2013. p. 35–54. 10.1007/978-3-642-38348-9_3.

[26] Albrecht MR. On dual lattice attacks against small-secret LWE and parameter choices in HElib and SEAL. In: Coron JS, Nielsen JB, editors. Advances in Cryptology – EUROCRYPT 2017. Cham: Springer International Publishing; 2017. p. 103–29. 10.1007/978-3-319-56614-6_4.

[27] Case B. Homomorphic encryption and cryptanalysis of lattice cryptography. PhD thesis. Clemson, SC: Clemson University; 2020. https://tigerprints.clemson.edu/all_dissertations/2635.

[28] Microsoft SEAL (release 4.1); 2023. Microsoft Research, Redmond, WA. https://github.com/Microsoft/SEAL.

[29] Yates K. Efficiency of homomorphic encryption schemes. MS thesis. Clemson, SC: Clemson University; 2022. https://tigerprints.clemson.edu/all_theses/3868.

[30] Player R. Parameter selection in lattice-based cryptography. PhD thesis. Royal Holloway: University of London; 2018. https://pure.royalholloway.ac.uk/ws/portalfiles/portal/29983580/2018playerrphd.pdf.

[31] Al Badawi A, Bates J, Bergamaschi F, Cousins DB, Erabelli S, Genise N, et al. OpenFHE: open-source fully homomorphic encryption library. In: Proceedings of the 10th Workshop on Encrypted Computing & Applied Homomorphic Cryptography. WAHC'22. New York, NY, USA: Association for Computing Machinery; 2022. p. 53–63. 10.1145/3560827.3563379.

[32] HElib homomorphic encryption library; 2013. https://github.com/homenc/HElib.

[33] Halevi S, Shoup V. Design and implementation of HElib: a homomorphic encryption library; 2020. Cryptology ePrint Archive, Paper 2020/1481. https://eprint.iacr.org/2020/1481.

[34] Han K, Ki D. Better bootstrapping for approximate homomorphic encryption. In: Jarecki S, editor. Topics in Cryptology – CT-RSA 2020. Cham: Springer International Publishing; 2020. p. 364–90. 10.1007/978-3-030-40186-3_16.

[35] Peikert C. A decade of lattice cryptography. Found Trends Theor Comput Sci. 2016 Mar;10(4):283–424. 10.1561/0400000074.

[36] Peikert C, Regev O, Stephens-Davidowitz N. Pseudorandomness of ring-LWE for any ring and modulus. In: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing. STOC 2017. New York, NY, USA: Association for Computing Machinery; 2017. p. 461–73. 10.1145/3055399.3055489.

[37] Ducas L. Shortest vector from lattice sieving: a few dimensions for free. 2017. Cryptology ePrint Archive, Paper 2017/999. https://eprint.iacr.org/2017/999.

[38] Ajtai M. Generating hard instances of lattice problems (extended abstract). In: Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing. STOC '96. New York, NY, USA: Association for Computing Machinery; 1996. p. 99–108. 10.1145/237814.237838.

[39] Lindner R, Peikert C. Better key sizes (and attacks) for LWE-based encryption. In: Kiayias A, editor. Topics in Cryptology – CT-RSA 2011. Berlin, Heidelberg: Springer Berlin Heidelberg; 2011. p. 319–39. 10.1007/978-3-642-19074-2_21.

[40] Lyubashevsky V, Micciancio D. On bounded distance decoding, unique shortest vectors, and the minimum distance problem. In: Halevi S, editor. Advances in Cryptology – CRYPTO 2009. Berlin, Heidelberg: Springer Berlin Heidelberg; 2009. p. 577–94. 10.1007/978-3-642-03356-8_34.

[41] Lenstra AK, Lenstra HW, Lovász LM. Factoring polynomials with rational coefficients. Math Ann. 1982;261:515–34. 10.1007/BF01457454.

[42] Blum A, Kalai A, Wasserman H. Noise-tolerant learning, the parity problem, and the statistical query model. J ACM. 2000 May;50:435–40. 10.1145/335305.335355.

[43] Schnorr C, Euchner M. Lattice basis reduction: improved practical algorithms and solving subset sum problems. Math Program. 1994 Aug;66:181–99. 10.1007/BF01581144.

[44] Chen Y. Réduction de réseau et sécurité concrète du chiffrement complètement homomorphe. PhD thesis. Paris Diderot University; 2013.

[45] van de Pol J, Smart NP. Estimating key sizes for high dimensional lattice-based systems. In: Stam M, editor. Cryptography and Coding. Berlin, Heidelberg: Springer Berlin Heidelberg; 2013. p. 290–303. 10.1007/978-3-642-45239-0_17.

[46] Micciancio D, Regev O. Lattice-based cryptography. In: Bernstein DJ, Buchmann J, Dahmen E, editors. Post-Quantum Cryptography. Berlin, Heidelberg: Springer Berlin Heidelberg; 2009. p. 147–91.

[47] Becker A, Ducas L, Gama N, Laarhoven T. New directions in nearest neighbor searching with applications to lattice sieving. In: Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms. SODA '16. USA: Society for Industrial and Applied Mathematics; 2016. p. 10–24. 10.1137/1.9781611974331.ch2.

[48] Laarhoven T. Search problems in cryptography: from fingerprinting to lattice sieving. PhD thesis. Eindhoven University of Technology; 2016.

[49] Lepoint T, Naehrig M. A comparison of the homomorphic encryption schemes FV and YASHE. In: Pointcheval D, Vergnaud D, editors. Progress in Cryptology – AFRICACRYPT 2014. Cham: Springer International Publishing; 2014. p. 318–35. 10.1007/978-3-319-06734-6_20.

Received: 2024-05-31
Revised: 2025-03-31
Accepted: 2025-05-21
Published Online: 2025-10-06

© 2025 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
