Article Open Access

On a mnemonic construction of permutations

Published/Copyright: February 9, 2017

Abstract

In the past, when only “paper and pencil” ciphers were in use, Bellaso’s mnemonic method was applied to obtain a permutation from a meaningful phrase. It turns out that even nowadays this method and its variations are employed, providing a simple but at the same time efficient tool for some ciphers. To the best of our knowledge, the security of such ciphers has not yet been evaluated in all respects. Therefore, in this paper we initiate a study of their security with respect to the probability distribution of permutations obtained by Bellaso’s method.

MSC 2010: 05C99

1 Introduction

Random permutations constitute one of the most important and most frequently used cryptographic primitives. Thus, a considerable effort has been devoted to the design of their generators. Here we only mention the algorithm of R. A. Fisher and F. Yates, first published in 1938; see also [8].

In 1555, when only “paper and pencil” ciphers were available, Giovan Battista Bellaso [2, 3] introduced a method that makes a long numerical password easy to remember: to a colloquial passphrase of length n a permutation on [n] = {1, 2, …, n} is assigned. Formally, let {k^n} be the set of all words of length n over A_k, an alphabet of k linearly ordered symbols, and let {n!} be the set of all permutations on the set [n]. Then Bellaso’s method can be described by a recursively defined mapping f : {k^n} → {n!}. For a word w and π = f(w), we have:

  1. Let s be the smallest symbol in w, and let the first, from the left, occurrence of s in w be in the i-th position of w. Then we mark this occurrence of s in w and set π(i)=1.

  2. Assume that t ≥ 1 symbols in w have already been marked. Let s be the smallest unmarked symbol in w whose first (unmarked) occurrence from the left is in the i-th position. Then we mark the symbol s in the i-th position and set π(i) = t + 1.

As an example, we note that f assigns to a passphrase “gateway” the permutation 4153627. Moreover, there are other meaningful passphrases producing the same permutation: pathway, parkway, hardway, halfway, handsaw, halfman, mangoes, etc. (see also the end of Section 3.1).
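The two marking steps above amount to ranking the positions of w by symbol, breaking ties left to right. A minimal Python sketch (the function name `bellaso` is ours, not from the original sources):

```python
def bellaso(word):
    """Bellaso's method: rank positions by symbol, ties broken left to right.

    Returns the permutation pi as a list, where pi[i] is the (1-based)
    rank assigned to position i."""
    n = len(word)
    pi = [0] * n
    # The i-th smallest (symbol, position) pair receives rank i + 1.
    order = sorted(range(n), key=lambda i: (word[i], i))
    for rank, pos in enumerate(order, start=1):
        pi[pos] = rank
    return pi

print(bellaso("gateway"))  # [4, 1, 5, 3, 6, 2, 7]
```

Running it on the other passphrases listed above ("pathway", "parkway", ...) yields the same permutation 4153627.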

Bellaso’s method has long been, and still is, widely used. Among well-known classical ciphers we mention various transposition ciphers, e.g., transposition tables where a secret permutation is required in the heading line.

There is strong evidence that methods generating a permutation in the manner of Bellaso are still applied [9]. Recently, several modifications of Bellaso’s method have been considered. Rivest [12] presented and analyzed a method in which passphrases play a central role. His main idea: the key is a “bag of words” agreed upon by sender and receiver, with the possibility to add or delete words from this bag. Zajac and his students [14] studied “tweakable random permutations” for so-called tweakable block ciphers [10]. Their goal was to create random, but grammatically and structurally correct, sentences. These sentences need not be meaningful, but their structure is fixed in advance.

Further, the standard Shannon principle of designing a symmetric cipher suggests interlacing confusion and diffusion layers several times, where diffusion is realized by a permutation. This permutation could also be given by a passphrase if there are enough rounds. Moreover, in many block or stream ciphers, random permutations on {1, …, 16} might in principle be used as a part of the key, e.g., in DES, GOST, EDON80, etc. Such permutations can be given by a mnemonic passphrase.

In most applications of paper and pencil ciphers, the simplicity of Bellaso’s method plays a key role. However, to evaluate the security of a cipher using Bellaso’s method, it is necessary to know the probability distribution of the obtained permutations. To the best of our knowledge, such a study has not been carried out yet. We found in the literature only results on the distribution of special classes of permutations; see, e.g., [1], where Bard et al. analyzed the probabilities of random permutations having given cycle structures.

To initiate research in this direction, we first consider a theoretical model. In this model, Bellaso’s method is applied to the set of all words of a given length over an alphabet (not only to the meaningful phrases), assuming that the probability distribution on the set of all words (i.e., on all n-grams) is uniform. We find it quite surprising that Eulerian numbers play an essential role there. As far as we know, this is the first occurrence of these famous numbers in cryptography.

In order to understand how good this theoretical model is, and how closely it mimics a real-life application, we will measure the distance of the probability distribution of permutations in our model to the uniform distribution (the ideal one) and to an English language model. In this English language model, the probability distribution on the domain of all possible n-grams is taken from standardized corpora of the English language. It turns out that for small values of the parameters, the two models are in good agreement. We also analyze the sensitivity of our model to its parameters.

2 Theoretical model

As mentioned in the introduction, Bellaso’s method is still used nowadays in several cryptographic algorithms. Therefore, in order to evaluate the security of a cipher, it is necessary to know the probability distribution on the pertinent set of permutations in {n!}.

In this section we will investigate this distribution in a theoretical model where we assume the uniform distribution on the domain (= all words of length n over an alphabet A_k of k linearly ordered symbols). That is, for each permutation π it is necessary to find the cardinality |f^{-1}(π)|, and to determine the number of permutations π with the same value of |f^{-1}(π)|, where f : {k^n} → {n!} is the function describing Bellaso’s method.

The matrices A and B given below contain the pertinent values for n = 5; the element a_{ik} (with columns indexed by k = 1, …, 9) equals the number of permutations π with |f^{-1}(π)| = b_i for the given value of k. For example, for k = 7, there are a_{4,7} = 26 permutations π such that b_4 = |f^{-1}(π)| = 56, and another a_{6,7} = 26 permutations π with b_6 = |f^{-1}(π)| = 252.

A = [ 1 26 66 26  1  0  0  0  0
      0  1 26 66 26  1  0  0  0
      0  0  1 26 66 26  1  0  0
      0  0  0  1 26 66 26  1  0
      0  0  0  0  1 26 66 26  1
      0  0  0  0  0  1 26 66 26
      0  0  0  0  0  0  1 26 66
      0  0  0  0  0  0  0  1 26 ],
B = [1, 6, 21, 56, 126, 252, 462, 792]^T.

As the size of the domain is k^n, we get the following formula as the dot product of the k-th column of A with B:

(1) \sum_{i} a_{ik} b_i = k^n.

In what follows the elements of the matrices A and B will be determined in general. We start by introducing a key notion of this section. It will be said that a permutation π contains a consecutive pair {j, j+1} if the number j precedes j+1, that is, when π^{-1}(j) < π^{-1}(j+1). The next theorem provides all possible values of |f^{-1}(π)| for a fixed number n. As usual, we set \binom{a}{b} := 0 for a < b.

Theorem 2.1

Let k be the size of an alphabet A_k. If π is a permutation on [n] with m consecutive pairs, then |f^{-1}(π)| = \binom{k+m}{n}.

Proof.

Let π be a permutation with m consecutive pairs. We will prove that

(2) |f^{-1}(π)| = \sum_{i=0}^{m} \binom{m}{i} \binom{k}{n-m+i} = \binom{k+m}{n}.

First we describe a recursive construction of all words w = (w(1), …, w(n)) that are mapped on π. Let i_1, …, i_n be the numbers such that π(i_j) = j for all j ∈ [n]. In the first step, w(i_1) is set to equal any symbol in A_k. Let w(i_1), …, w(i_s) have already been constructed. Then w(i_{s+1}) has to be a symbol that is equal to or bigger than the symbols already used. It is permitted to set w(i_{s+1}) = w(i_s) only in the case when {s, s+1} forms a consecutive pair in π; that is, when i_s < i_{s+1}.

It follows from the above construction that a word w that is mapped on π has to contain at least n-m distinct symbols. If there are precisely n-m of them, then w is uniquely determined, and thus there are \binom{m}{0}\binom{k}{n-m} such words w in f^{-1}(π). Indeed, it is necessary to choose the smallest symbol for w(i_1), and for each consecutive pair {s, s+1} in π to set w(i_{s+1}) = w(i_s).

By the same token, there are \binom{m}{1}\binom{k}{n-m+1} words w in f^{-1}(π) with n-m+1 distinct symbols. In their construction there is one “degree of freedom”: we choose one of the m consecutive pairs {s, s+1} in π and assign to w(i_{s+1}) the next of the n-m+1 symbols of w after w(i_s). Similarly, there are \binom{m}{t}\binom{k}{n-m+t} words in f^{-1}(π) with exactly n-m+t distinct symbols each.

To finish the proof it suffices to show the right-hand equality in (2). There are many proofs of this well-known identity; for example, it represents the number of ways in which n balls can be chosen from a set of m white balls and k black balls. The proof is complete. ∎
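For small parameters, Theorem 2.1 can be checked by exhaustive enumeration. The sketch below (helper names are ours) counts the preimages of every permutation of [4] over a 5-letter alphabet:

```python
from itertools import product, permutations
from math import comb

def bellaso(word):
    # rank positions by symbol, ties broken left to right
    n = len(word)
    pi = [0] * n
    for rank, pos in enumerate(sorted(range(n), key=lambda i: (word[i], i)), 1):
        pi[pos] = rank
    return tuple(pi)

def consecutive_pairs(pi):
    # number of j such that j precedes j + 1 in pi
    position = {v: i for i, v in enumerate(pi)}
    return sum(1 for j in range(1, len(pi)) if position[j] < position[j + 1])

n, k = 4, 5
counts = {}
for w in product(range(k), repeat=n):  # all k^n words
    pi = bellaso(w)
    counts[pi] = counts.get(pi, 0) + 1

# Theorem 2.1: |f^{-1}(pi)| = C(k + m, n), m = number of consecutive pairs
for pi in permutations(range(1, n + 1)):
    m = consecutive_pairs(pi)
    assert counts.get(pi, 0) == comb(k + m, n)
```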

Hence, the elements in A are closely related to the numbers of permutations on [n] with m consecutive pairs; we denote them by D(n,m). The proof of Corollary 2.2 explains the symmetry of columns of A.

Corollary 2.2

For each n and 0 ≤ m ≤ n-1, one has

D(n,m)=D(n,n-1-m).

Proof.

It suffices to note that if a permutation π = (π(1), …, π(n-1), π(n)) has m consecutive pairs, then the reverse permutation π′ = (π(n), π(n-1), …, π(1)) has n-m-1 consecutive pairs. ∎

The next statement provides a closed formula for the number of permutations on [n] with m consecutive pairs.

Theorem 2.3

For each n > 0 and 0 ≤ m < n, one has

D(n,m) = \sum_{i=0}^{m} (-1)^i \binom{n+1}{i} (m+1-i)^n.

Proof.

By Theorem 2.1 and (1), the possible values of |f^{-1}(π)| are \binom{k+m}{n}, where m runs over the numbers of consecutive pairs. Thus, (1) can be written as

(3) \sum_{m=0}^{n-1} D(n,m) \binom{k+m}{n} = k^n.

To facilitate notation we set D(n,m) = x_m. Substituting into (3) gradually k = 1, …, n, and taking into account D(n,m) = D(n, n-1-m), i.e., x_m = x_{n-1-m}, we get the following system of equations:

\binom{n}{0} x_0 = 1^n,
\binom{n+1}{1} x_0 + \binom{n}{0} x_1 = 2^n,
\binom{n+2}{2} x_0 + \binom{n+1}{1} x_1 + \binom{n}{0} x_2 = 3^n,
…
\binom{2n-1}{n-1} x_0 + \binom{2n-2}{n-2} x_1 + … + \binom{n}{0} x_{n-1} = n^n.

As the matrix of the system is of rank n, this system has a unique solution. By substituting

x_m = \sum_{i=0}^{m} (-1)^i \binom{n+1}{i} (m+1-i)^n

in the j-th equation of the system, we get

\binom{n+j-1}{j-1} \sum_{i=0}^{0} (-1)^i \binom{n+1}{i} (1-i)^n + \binom{n+j-2}{j-2} \sum_{i=0}^{1} (-1)^i \binom{n+1}{i} (2-i)^n
+ … + \binom{n+j-t}{j-t} \sum_{i=0}^{t-1} (-1)^i \binom{n+1}{i} (t-i)^n + … + \binom{n}{0} \sum_{i=0}^{j-1} (-1)^i \binom{n+1}{i} (j-i)^n = j^n.

Rearranging terms leads to

\sum_{i=0}^{j-1} (-1)^i \binom{n+1}{i} \binom{n+j-1-i}{j-1-i} 1^n + \sum_{i=0}^{j-2} (-1)^i \binom{n+1}{i} \binom{n+j-2-i}{j-2-i} 2^n
+ … + \sum_{i=0}^{j-t} (-1)^i \binom{n+1}{i} \binom{n+j-t-i}{j-t-i} t^n + … + \sum_{i=0}^{0} (-1)^i \binom{n+1}{i} \binom{n}{0} j^n = j^n.

However, by Hagen’s identity [7],

\sum_{i=0}^{k} (-1)^i \binom{y}{i} \binom{x-i}{k-i} = \binom{x-y}{k},

we have, for each 1 ≤ t < j,

\sum_{i=0}^{j-t} (-1)^i \binom{n+1}{i} \binom{n+j-t-i}{j-t-i} = \binom{j-t-1}{j-t} = 0,

while for t = j the coefficient of j^n equals \binom{n}{0} = 1.

The proof follows. ∎
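The closed formula of Theorem 2.3 and identity (3) are easy to verify numerically; a short sketch (the helper name `D` mirrors the notation of the text):

```python
from math import comb

def D(n, m):
    # closed formula of Theorem 2.3
    return sum((-1)**i * comb(n + 1, i) * (m + 1 - i)**n for i in range(m + 1))

# For n = 5 these are exactly the counts appearing in the matrix A above.
print([D(5, m) for m in range(5)])  # [1, 26, 66, 26, 1]

for n in range(1, 7):
    # identity (3): sum_m D(n,m) * C(k+m, n) = k^n
    for k in range(1, 10):
        assert sum(D(n, m) * comb(k + m, n) for m in range(n)) == k**n
    # symmetry of Corollary 2.2
    assert all(D(n, m) == D(n, n - 1 - m) for m in range(n))
```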

It turns out that the D(n,m) are in fact very well-known numbers. In 1755, Leonhard Euler [5] investigated polynomials whose coefficients E(n,m), 0 ≤ m < n, are nowadays called Eulerian numbers. These numbers are found in many different areas of mathematics, cf. [11]. In combinatorics, E(n,m) represents the number of permutations π = (π_1, …, π_n) with m ascents, i.e., with m indices i such that π_i < π_{i+1}.

Corollary 2.4

D(n,m) are Eulerian numbers.

This statement follows from [13], where it is shown that E(n,m) = \sum_{i=0}^{m} (-1)^i \binom{n+1}{i} (m+1-i)^n.

The next corollary provides a direct proof of the identity D(n,m)=E(n,m).

Corollary 2.5

The number of permutations of length n with m ascents equals the number of permutations with m consecutive pairs.

Proof.

Let π = (π_1, π_2, …, π_n) be a permutation on [n] and π^{-1} its inverse. Then {k, k+1} is a consecutive pair in π (i.e., k precedes k+1) if and only if π^{-1} has an ascent at k. Thus, if π has s ascents and t consecutive pairs, then π^{-1} has s consecutive pairs and t ascents. This in turn implies that the mapping T on {n!} given by T(π) = π^{-1} provides a bijection between permutations with m ascents and permutations with m consecutive pairs. ∎

We illustrate the above proof by an example. The permutation π = (2, 4, …, 2k, 1, 3, …, 2k-1) has 2k-2 ascents and k-1 consecutive pairs, and π^{-1} = (k+1, 1, k+2, 2, k+3, 3, …, 2k-1, k-1, 2k, k) has k-1 ascents and 2k-2 consecutive pairs.
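The bijection in the proof of Corollary 2.5 can be confirmed by brute force for small n; a sketch with our own helper names:

```python
from itertools import permutations

def ascents(pi):
    # positions i with pi_i < pi_{i+1}
    return sum(pi[i] < pi[i + 1] for i in range(len(pi) - 1))

def consecutive_pairs(pi):
    # values j appearing before j + 1
    position = {v: i for i, v in enumerate(pi)}
    return sum(position[j] < position[j + 1] for j in range(1, len(pi)))

def inverse(pi):
    inv = [0] * len(pi)
    for i, v in enumerate(pi):
        inv[v - 1] = i + 1
    return tuple(inv)

# Inversion swaps the two statistics, as in the proof of Corollary 2.5.
for n in range(1, 7):
    for pi in permutations(range(1, n + 1)):
        assert ascents(pi) == consecutive_pairs(inverse(pi))
        assert consecutive_pairs(pi) == ascents(inverse(pi))
```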

3 Statistical analysis of the theoretical model

In this section we will study how the probability distribution on {n!} in our theoretical model differs from the probability distribution in the English language model, and how these two distributions differ from the uniform distribution on {n!}. It is common to use the Manhattan metric d_M (a.k.a. variation distance, zigzag metric, ℓ_1 metric) [6] to measure the statistical distance of two discrete distributions over the same sample space. For two probability distributions P and Q on the set of all permutations π_i on [n], with probabilities p_i and q_i, we have

d_M(P,Q) = \sum_{i=1}^{n!} |p_i - q_i|.

Obviously, d_M(P,Q) ≤ 2, with equality if and only if p_i q_i = 0 for all i.

As for the probability p of a permutation π, it depends on the probability distribution on the domain. It is

(4) p = \sum prob(n-gram),

where the sum runs over all n-grams in f^{-1}(π). We set p = 0 if f^{-1}(π) = ∅.

In the theoretical model introduced in the previous section, the uniform distribution is assumed on the domain {k^n}. It has been shown there that in this case we have |f^{-1}(π)| = \binom{k+m}{n}, where m is the number of consecutive pairs of π. Hence, by (4), the probability p of π in our theoretical model is

p = \binom{k+m}{n} / k^n.

3.1 Consistency of the theoretical model with the English language model

First we focus on the extent to which the English language model is consistent with the theoretical one. In the theoretical model and in the English language model we denote the probability distributions of permutations by P and Q, and the probability of a permutation π_i in P and Q will be denoted by p_i and q_i, respectively.

For our English language model we selected texts of different lengths from the Open American National Corpus (OANC, www.anc.org/data/oanc/download/). We have chosen texts of size {1–5, 10–50} × 10^6 letters. As for the distribution on the domain (unlike the theoretical model, where the uniform distribution is considered), we have calculated the probabilities of n-grams as follows: in each corpus, we set a window of length n and slid it over the whole corpus; to get the probability (= the relative frequency) of an n-gram, the obtained frequency was divided by the total number of n-grams in the corpus, i.e., by T - n + 1, where T is the total length of the corpus. The probability q_i is then calculated by (4).
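The sliding-window estimation and formula (4) can be sketched as follows; the corpus string and helper names are illustrative, not taken from the paper's tooling:

```python
from collections import Counter

def bellaso(word):
    # rank positions by symbol, ties broken left to right
    n = len(word)
    pi = [0] * n
    for rank, pos in enumerate(sorted(range(n), key=lambda i: (word[i], i)), 1):
        pi[pos] = rank
    return tuple(pi)

def ngram_probs(text, n):
    # relative frequency: count of each window divided by T - n + 1
    total = len(text) - n + 1
    counts = Counter(text[i:i + n] for i in range(total))
    return {g: c / total for g, c in counts.items()}

def permutation_probs(text, n):
    # q_i for each permutation: sum of n-gram probabilities over f^{-1}(pi), cf. (4)
    q = {}
    for g, p in ngram_probs(text, n).items():
        pi = bellaso(g)
        q[pi] = q.get(pi, 0.0) + p
    return q

q = permutation_probs("abcab", 2)
print(q)  # {(1, 2): 0.75, (2, 1): 0.25}
```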

The distance of the two distributions P and Q was evaluated for all n from 1 to 10; in all cases, the parameter k was set to 26.

The obtained results indicate that the statistical distance increases with increasing n and decreases with increasing corpus size. Figure 1 shows the statistical distance of the English language model to the theoretical one for n = 1, 2, …, 10 and corpora of size c×10^6 for various values of c.

It is clear that the distance of the two models also depends on the total number of n-grams occurring in the given corpus, i.e., on the number of n-grams with a non-zero probability. As expected, our experiments show that the ratio r_n of all n-grams with a non-zero probability in the English language model to all possible n-grams decreases rapidly with growing n. Figures 2 and 3 depict this ratio for the corpus sizes c×10^6, c = 1, 10, 20, 30, 40, 50. As the ratio becomes very small for n ≥ 6, Figure 3 uses a logarithmic scale.

Figure 1: The statistical distance of the English language model to the theoretical one for n = 1, 2, …, 10 and corpora of size c×10^6 for various values of c.

Figure 2: Ratio r_n of all n-grams with a non-zero probability in the English language model to all possible n-grams.

Figure 3: Ratio r_n of all n-grams with a non-zero probability in the English language model to all possible n-grams, in a logarithmic scale.

At the end of this subsection we point out that although the ratio r_n decreases dramatically with growing n, Bellaso’s mapping is “nearly onto”; i.e., the size of the range of f in the English language model is nearly n!. In greater detail, in corpora of all considered sizes c×10^6 and for n ≤ 7, the mapping f is onto. Even for n = 8 and c = 1 there are only 13 permutations without preimages, and for c = 2 there is only one such permutation. In all other cases, every permutation has multiple preimages.

For n=9 the number of permutations without a preimage gradually decreases from 79 734 permutations for c=1 to 7 for the largest c=50, i.e. less than 0.002% in this case.

For n=10 the number of permutations without a preimage gradually decreases from 2 954 193 permutations in the case of c=1 to 206 797 for the largest corpus, i.e. less than 5.7%.

3.2 Comparing the distributions P and Q to the uniform one

This subsection is devoted to comparing the probability distributions in our theoretical model and the English language model to the uniform probability distribution R. By Corollary 2.5, we get

d_M(P,R) = \sum_{m=0}^{n-1} E(n,m) \left| \frac{\binom{k+m}{n}}{k^n} - \frac{1}{n!} \right|.
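With the closed formula for E(n,m), this distance is a one-line computation. The sketch below (our own helper names) reproduces the value 0.08967 for n = 4, k = 26 reported in Table 1:

```python
from math import comb, factorial

def eulerian(n, m):
    # closed formula, cf. Theorem 2.3 and [13]
    return sum((-1)**i * comb(n + 1, i) * (m + 1 - i)**n for i in range(m + 1))

def dist_to_uniform(n, k):
    # d_M(P, R) = sum_m E(n,m) * |C(k+m, n)/k^n - 1/n!|
    return sum(eulerian(n, m) * abs(comb(k + m, n) / k**n - 1 / factorial(n))
               for m in range(n))

print(round(dist_to_uniform(4, 26), 5))  # 0.08967
```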

In Table 1 we present, for each permutation π of length n = 4, the number m of its consecutive pairs and the probability p of π for the given size k of the alphabet. This way, the individual columns of the table provide information on how uniformly the probabilities of individual permutations are spread. In the last row, the statistical distance of the distribution in the theoretical model to the uniform one is calculated.

Table 1

Number of consecutive pairs m, the probability p of π for the given size k of alphabet and the distance to the uniform distribution.

π         | m | k=1     | k=2    | k=3     | k=4     | k=5   | k=6     | k=7     | k=8     | k=9     | k=10   | k=26
[1,2,3,4] | 3 | 1       | 0.3125 | 0.18519 | 0.13672 | 0.112 | 0.09722 | 0.08746 | 0.08057 | 0.07545 | 0.0715 | 0.05197
[1,2,4,3] | 2 | 0       | 0.0625 | 0.06173 | 0.05859 | 0.056 | 0.05401 | 0.05248 | 0.05127 | 0.0503  | 0.0495 | 0.04481
[1,3,2,4] | 2 | 0       | 0.0625 | 0.06173 | 0.05859 | 0.056 | 0.05401 | 0.05248 | 0.05127 | 0.0503  | 0.0495 | 0.04481
[1,3,4,2] | 2 | 0       | 0.0625 | 0.06173 | 0.05859 | 0.056 | 0.05401 | 0.05248 | 0.05127 | 0.0503  | 0.0495 | 0.04481
[1,4,2,3] | 2 | 0       | 0.0625 | 0.06173 | 0.05859 | 0.056 | 0.05401 | 0.05248 | 0.05127 | 0.0503  | 0.0495 | 0.04481
[2,1,3,4] | 2 | 0       | 0.0625 | 0.06173 | 0.05859 | 0.056 | 0.05401 | 0.05248 | 0.05127 | 0.0503  | 0.0495 | 0.04481
[2,3,1,4] | 2 | 0       | 0.0625 | 0.06173 | 0.05859 | 0.056 | 0.05401 | 0.05248 | 0.05127 | 0.0503  | 0.0495 | 0.04481
[2,3,4,1] | 2 | 0       | 0.0625 | 0.06173 | 0.05859 | 0.056 | 0.05401 | 0.05248 | 0.05127 | 0.0503  | 0.0495 | 0.04481
[3,1,2,4] | 2 | 0       | 0.0625 | 0.06173 | 0.05859 | 0.056 | 0.05401 | 0.05248 | 0.05127 | 0.0503  | 0.0495 | 0.04481
[3,1,4,2] | 2 | 0       | 0.0625 | 0.06173 | 0.05859 | 0.056 | 0.05401 | 0.05248 | 0.05127 | 0.0503  | 0.0495 | 0.04481
[3,4,1,2] | 2 | 0       | 0.0625 | 0.06173 | 0.05859 | 0.056 | 0.05401 | 0.05248 | 0.05127 | 0.0503  | 0.0495 | 0.04481
[4,1,2,3] | 2 | 0       | 0.0625 | 0.06173 | 0.05859 | 0.056 | 0.05401 | 0.05248 | 0.05127 | 0.0503  | 0.0495 | 0.04481
[1,4,3,2] | 1 | 0       | 0      | 0.01235 | 0.01953 | 0.024 | 0.02701 | 0.02915 | 0.03076 | 0.03201 | 0.033  | 0.0384
[2,1,4,3] | 1 | 0       | 0      | 0.01235 | 0.01953 | 0.024 | 0.02701 | 0.02915 | 0.03076 | 0.03201 | 0.033  | 0.0384
[2,4,1,3] | 1 | 0       | 0      | 0.01235 | 0.01953 | 0.024 | 0.02701 | 0.02915 | 0.03076 | 0.03201 | 0.033  | 0.0384
[2,4,3,1] | 1 | 0       | 0      | 0.01235 | 0.01953 | 0.024 | 0.02701 | 0.02915 | 0.03076 | 0.03201 | 0.033  | 0.0384
[3,2,1,4] | 1 | 0       | 0      | 0.01235 | 0.01953 | 0.024 | 0.02701 | 0.02915 | 0.03076 | 0.03201 | 0.033  | 0.0384
[3,2,4,1] | 1 | 0       | 0      | 0.01235 | 0.01953 | 0.024 | 0.02701 | 0.02915 | 0.03076 | 0.03201 | 0.033  | 0.0384
[3,4,2,1] | 1 | 0       | 0      | 0.01235 | 0.01953 | 0.024 | 0.02701 | 0.02915 | 0.03076 | 0.03201 | 0.033  | 0.0384
[4,1,3,2] | 1 | 0       | 0      | 0.01235 | 0.01953 | 0.024 | 0.02701 | 0.02915 | 0.03076 | 0.03201 | 0.033  | 0.0384
[4,2,1,3] | 1 | 0       | 0      | 0.01235 | 0.01953 | 0.024 | 0.02701 | 0.02915 | 0.03076 | 0.03201 | 0.033  | 0.0384
[4,2,3,1] | 1 | 0       | 0      | 0.01235 | 0.01953 | 0.024 | 0.02701 | 0.02915 | 0.03076 | 0.03201 | 0.033  | 0.0384
[4,3,1,2] | 1 | 0       | 0      | 0.01235 | 0.01953 | 0.024 | 0.02701 | 0.02915 | 0.03076 | 0.03201 | 0.033  | 0.0384
[4,3,2,1] | 0 | 0       | 0      | 0       | 0.00391 | 0.008 | 0.01157 | 0.01458 | 0.01709 | 0.0192  | 0.021  | 0.03272
d(P,R)    |   | 1.91667 | 1      | 0.7284  | 0.5625  | 0.456 | 0.38272 | 0.32945 | 0.28906 | 0.25743 | 0.232  | 0.08967

Table 2 contains the distance d(n) between the probability distribution in the theoretical model and the uniform distribution for several values of n and k = 26.

Table 2

The distance d(n) between the theoretical model and the uniform distribution for several values of n and k = 26.

n    | 3          | 4          | 5          | 6          | 7          | 8          | 9          | 10
d(n) | 0.03944773 | 0.08966773 | 0.09373096 | 0.15266043 | 0.16253677 | 0.22488947 | 0.24373486 | 0.30403801

Tables 3 and 4 contain the same information for the English model as Tables 1 and 2 do for the theoretical one. In this case the value of k was fixed at 26, but different corpus sizes have been considered. We point out that while the probabilities of individual permutations with the same number of consecutive pairs are equal in Table 1, they differ in Table 4. The reason is that the probability distribution of n-grams in the domain of the English model is not uniform.

Table 3

The distance d(n) between the English language model and the uniform distribution for several values of n and c, and k = 26.

      | c=1   | c=2   | c=3   | c=4   | c=5   | c=10  | c=20  | c=30  | c=40  | c=50
d(3)  | 0.036 | 0.035 | 0.038 | 0.037 | 0.036 | 0.037 | 0.04  | 0.047 | 0.042 | 0.041
d(4)  | 0.075 | 0.078 | 0.077 | 0.077 | 0.077 | 0.077 | 0.077 | 0.087 | 0.09  | 0.093
d(5)  | 0.128 | 0.13  | 0.13  | 0.13  | 0.129 | 0.128 | 0.127 | 0.127 | 0.126 | 0.125
d(6)  | 0.196 | 0.193 | 0.193 | 0.192 | 0.191 | 0.187 | 0.186 | 0.184 | 0.181 | 0.181
d(7)  | 0.281 | 0.274 | 0.27  | 0.267 | 0.265 | 0.259 | 0.255 | 0.253 | 0.25  | 0.25
d(8)  | 0.421 | 0.395 | 0.382 | 0.374 | 0.368 | 0.353 | 0.341 | 0.336 | 0.332 | 0.331
d(9)  | 0.74  | 0.64  | 0.6   | 0.572 | 0.554 | 0.509 | 0.475 | 0.46  | 0.452 | 0.449
d(10) | 1.628 | 1.394 | 1.219 | 1.083 | 0.976 | 0.86  | 0.748 | 0.699 | 0.677 | 0.665
Table 4

The probability p of π for n=4 and size k=26 of alphabet, and the distance to uniform distribution in the case of English language model.

π         | c=1      | c=2      | c=3      | c=4      | c=5     | c=10     | c=20     | c=30     | c=40     | c=50
[1,2,3,4] | 0.04551  | 0.04569  | 0.04628  | 0.04639  | 0.04656 | 0.0468   | 0.04729  | 0.05029  | 0.05129  | 0.05154
[1,2,4,3] | 0.04281  | 0.04257  | 0.0426   | 0.04278  | 0.04286 | 0.04308  | 0.04294  | 0.04365  | 0.04386  | 0.04386
[1,3,2,4] | 0.04379  | 0.04351  | 0.04356  | 0.04346  | 0.04328 | 0.04344  | 0.04297  | 0.04418  | 0.04481  | 0.04527
[1,3,4,2] | 0.04318  | 0.04362  | 0.04313  | 0.04279  | 0.04309 | 0.04332  | 0.04334  | 0.04338  | 0.04354  | 0.04377
[1,4,2,3] | 0.04416  | 0.04376  | 0.04392  | 0.04409  | 0.04419 | 0.04364  | 0.04351  | 0.04384  | 0.04431  | 0.04456
[2,1,3,4] | 0.04623  | 0.04652  | 0.04602  | 0.04583  | 0.04562 | 0.04543  | 0.04488  | 0.04602  | 0.04602  | 0.04573
[2,3,1,4] | 0.04727  | 0.04775  | 0.04726  | 0.04742  | 0.04745 | 0.04751  | 0.04747  | 0.04658  | 0.04662  | 0.04669
[2,3,4,1] | 0.04216  | 0.04228  | 0.04284  | 0.04324  | 0.04374 | 0.0439   | 0.04462  | 0.04483  | 0.04484  | 0.04496
[3,1,2,4] | 0.04334  | 0.04381  | 0.04377  | 0.04372  | 0.04345 | 0.04384  | 0.04392  | 0.04484  | 0.04519  | 0.04527
[3,1,4,2] | 0.04904  | 0.04902  | 0.04891  | 0.04892  | 0.04875 | 0.04824  | 0.04826  | 0.04699  | 0.04657  | 0.04645
[3,4,1,2] | 0.04188  | 0.04234  | 0.04248  | 0.04267  | 0.04272 | 0.04305  | 0.04396  | 0.04381  | 0.04349  | 0.04361
[4,1,2,3] | 0.04262  | 0.0426   | 0.04323  | 0.04332  | 0.04352 | 0.0439   | 0.04462  | 0.0449   | 0.04446  | 0.04456
[1,4,3,2] | 0.03934  | 0.03944  | 0.03937  | 0.03937  | 0.0393  | 0.03878  | 0.03803  | 0.03778  | 0.03794  | 0.0379
[2,1,4,3] | 0.04428  | 0.04448  | 0.04391  | 0.04364  | 0.0434  | 0.04304  | 0.04207  | 0.04081  | 0.0405   | 0.04004
[2,4,1,3] | 0.04097  | 0.04068  | 0.04054  | 0.04054  | 0.04054 | 0.04061  | 0.04028  | 0.03963  | 0.03901  | 0.03906
[2,4,3,1] | 0.0377   | 0.03747  | 0.03739  | 0.03723  | 0.03722 | 0.03713  | 0.03692  | 0.03767  | 0.03802  | 0.0379
[3,2,1,4] | 0.04458  | 0.04432  | 0.04378  | 0.04367  | 0.04337 | 0.04279  | 0.04201  | 0.04088  | 0.04076  | 0.04032
[3,2,4,1] | 0.03746  | 0.03771  | 0.03778  | 0.03792  | 0.03813 | 0.0384   | 0.03844  | 0.03819  | 0.03812  | 0.03801
[3,4,2,1] | 0.03739  | 0.03693  | 0.03696  | 0.03654  | 0.03653 | 0.03654  | 0.03707  | 0.03636  | 0.03634  | 0.03658
[4,1,3,2] | 0.03912  | 0.03915  | 0.03929  | 0.03955  | 0.03935 | 0.03944  | 0.03938  | 0.0383   | 0.03755  | 0.0375
[4,2,1,3] | 0.04123  | 0.0411   | 0.04094  | 0.04106  | 0.04116 | 0.04079  | 0.04089  | 0.04015  | 0.0399   | 0.03964
[4,2,3,1] | 0.03612  | 0.03609  | 0.03647  | 0.03673  | 0.03667 | 0.03679  | 0.03704  | 0.03711  | 0.03705  | 0.03715
[4,3,1,2] | 0.03726  | 0.0374   | 0.03747  | 0.03733  | 0.03744 | 0.03773  | 0.03809  | 0.03784  | 0.03774  | 0.03767
[4,3,2,1] | 0.03254  | 0.03173  | 0.03211  | 0.03178  | 0.03164 | 0.03182  | 0.03202  | 0.03197  | 0.03206  | 0.03195
d(Q,R)    | 0.075066 | 0.077922 | 0.076702 | 0.077230 | 0.07736 | 0.077287 | 0.077045 | 0.086630 | 0.090028 | 0.092582

4 Conclusions

In this paper we have initiated a study of the security of ciphers employing Bellaso’s method with respect to the probability distribution of permutations generated in this way. As usual, we have assumed that the uniform probability distribution of permutations is the ideal one. Intuition suggests that for large n, Bellaso’s mnemonic method does not provide a distribution on {n!} that is close to the uniform one. However, for small n, up to 8, Tables 2 and 4 demonstrate that the probability distributions of permutations in both the theoretical model and the English model are close to the ideal one.

To get a better probability distribution even for larger values of n, it suffices to reduce the range of the mapping f in a suitable way and, additionally, to increase k, the size of the alphabet. In [4], it is shown that for n → ∞, Eulerian numbers E(n,m) can be approximated by a Gaussian distribution with mean μ = (n+1)/2 and variance σ² = (n+1)/12. Thus confining the range of f to permutations with m consecutive pairs, m ∈ [(n+1)/2 - δ, (n+1)/2 + δ] for a suitably small δ, is likely the best way to modify it. For n = 16, a frequent length of a part of the key (cf. the introduction), and setting m ∈ [6, 9], the range of f comprises 92% of all permutations and the domain contains 82% of all 16-grams for k = 26 (90% for k = 52). The distance of the probability distribution to the uniform one in the case k = 26 is 0.486, but it decreases to 0.2531 for k = 52. Limiting m to {7, 8} still leads to 61% of all permutations, and a half of all passphrases for k = 26 (58% for k = 52), while the probability distribution on the range of f is now the uniform one.
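The fraction of permutations retained when m is confined to [6, 9] can be checked numerically from the closed formula for E(n,m); a short sketch (the function name `eulerian` is ours):

```python
from math import comb, factorial

def eulerian(n, m):
    # closed formula for Eulerian numbers, cf. Theorem 2.3 and [13]
    return sum((-1)**i * comb(n + 1, i) * (m + 1 - i)**n for i in range(m + 1))

n = 16
total = sum(eulerian(n, m) for m in range(n))
assert total == factorial(n)  # Eulerian numbers sum to n!

# Fraction of permutations of [16] whose number m of consecutive pairs
# lies in [6, 9]; per the text this is about 92% of all permutations.
frac = sum(eulerian(n, m) for m in range(6, 10)) / factorial(n)
print(round(frac, 2))
```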


Communicated by Spyros Magliveras


Acknowledgements

We are indebted to Noga Alon for inspiring discussions on the topic of the paper and for the proof of Corollary 2.5. We are also grateful to anonymous reviewers for helpful comments and remarks.

References

[1] Bard G. V., Ault S. V. and Courtois N. T., Statistics of random permutations and the cryptanalysis of periodic block ciphers, Cryptologia 36 (2012), 240–262. doi:10.1080/01611194.2011.632806.

[2] Bauer F. L., Decrypted Secrets. Methods and Maxims of Cryptology, Springer, Berlin, 2007.

[3] Bellaso G. B., Novi et singolari modi di cifrare de l’eccellente dottore di legge Messer Giouan Battista Bellaso nobile bresciano, Lodovico Britannico, Brescia, 1555.

[4] Carlitz L., Kurtz D. C., Scoville R. and Stackelberg O. P., Asymptotic properties of Eulerian numbers, Z. Wahrscheinlichkeitstheor. Verw. Geb. 23 (1972), 47–54. doi:10.1007/BF00536689.

[5] Eulerus L., Institutiones calculi differentialis cum eius usu in analysi finitorum ac doctrina serierum, Academia Scientiarum Imperialis Petropolitanae, Sankt Petersburg, 1755.

[6] Goldreich O., Foundations of Cryptography: Basic Tools, Cambridge University Press, Cambridge, 2001. doi:10.1017/CBO9780511546891.

[7] Hagen J. G., Theorie der Combinationen, Synopsis der höheren Mathematik. Erster Band. Arithmetische und algebrische Analysis, Felix L. Dames, Berlin (1891), 55–68.

[8] Knuth D. E., The Art of Computer Programming. Vol. 2, Addison-Wesley, Boston, 2006.

[9] Kollar J., Soviet VIC cipher: No respector of Kerckhoff’s principles, Cryptologia 40 (2016), no. 1, 33–48. doi:10.1080/01611194.2015.1028679.

[10] Liskov M., Rivest R. L. and Wagner D., Tweakable block ciphers, J. Cryptology 24 (2011), 588–613. doi:10.1007/3-540-45708-9_3.

[11] Petersen T. K., Eulerian Numbers, Birkhäuser, Basel, 2015. doi:10.1007/978-1-4939-3091-3.

[12] Rivest R. L., Symmetric encryption via keyrings and ECC, preprint 2016, http://arcticcrypt.b.uib.no/files/2016/07/Slides-Rivest.pdf.

[13] Worpitzky J., Studien über die Bernoullischen und Eulerschen Zahlen, J. Reine Angew. Math. 94 (1883), 203–232. doi:10.1515/crll.1883.94.203.

[14] Zajac P., Passphrase generator and typing, 2016, http://hanzo.sk/TP/.

Received: 2016-10-13
Revised: 2017-1-24
Accepted: 2017-1-25
Published Online: 2017-2-9
Published in Print: 2017-3-1

© 2017 by De Gruyter

This article is distributed under the terms of the Creative Commons Attribution Non-Commercial License, which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
