
A block Newton’s method for computing invariant pairs of nonlinear matrix pencils

  • Kirill V. Demyanko and Yuri M. Nechepurenko
Published/Copyright: 21 February 2018

Abstract

The block inverse iteration and block Newton’s methods proposed by the authors of this paper for computing invariant pairs of regular linear matrix pencils are generalized to the case of regular nonlinear matrix pencils. Numerical properties of the proposed methods are demonstrated with a typical quadratic eigenproblem.

MSC 2010: 65F15; 65F10; 65F08

1 Introduction

A block Newton method was proposed in [7] for ordinary partial linear eigenvalue problems. The theory of its convergence was constructed in terms of integral performance criteria for dichotomy. This method proved to be quite efficient and has been further developed by various authors (see, e.g., [3, 4, 9]). In particular, a variant of this method was proposed in [4] for computing the invariant pairs of regular linear matrix pencils of general form, and an efficient variant of the inverse subspace iteration with tuning was proposed for computing the initial guess. It was proposed in [5] to use the methods from [4] for computing the minimal invariant pairs of regular nonlinear matrix pencils within the method of successive linear problems supplemented with the deflation procedure from [10]. Though the proposed combination of four algorithms proved to be quite efficient, the question of a direct generalization of the algorithms from [4] to the case of nonlinear matrix pencils has remained open. Such a generalization would allow one to essentially decrease the logical complexity and the number of parameters compared to the method from [5]. The aim of this paper is to propose and justify such a generalization.

Recall that nonlinear matrix pencils in the matrix analysis usually mean pencils of the form

$$T(\lambda) = \sum_{i=1}^{d} T_i f_i(\lambda) \qquad (1.1)$$

where T1, …, Td are square complex matrices of order n and f1, …, fd are scalar functions of a complex variable analytic in some domain Ω of the complex plane (see, e.g., [10]). Pencil (1.1) is called regular if there exists λ0 ∈ Ω such that det T(λ0) ≠ 0. The set of eigenvalues of a regular pencil of form (1.1), i.e., the set of solutions to the equation det T(λ) = 0, is at most countable. If λ is one of those roots, then the nonzero vectors belonging to the kernel of the matrix T(λ) are called eigenvectors of pencil (1.1) corresponding to the eigenvalue λ.

The pair (X, Λ), where X ∈ ℂn×p, Λ ∈ ℂp×p, is called the minimal invariant pair of pencil (1.1) if, first,

$$T(X, \Lambda) = \sum_{i=1}^{d} T_i X f_i(\Lambda) = 0$$

where

$$f(\Lambda) = \frac{1}{2\pi i} \oint_{\gamma} f(z)\,(zI - \Lambda)^{-1}\, dz$$

and γ is a sufficiently smooth simple positively oriented closed contour in Ω enveloping all eigenvalues of the matrix Λ; second, there exists a positive integer number l such that the matrix

$$\left[\, X^T,\; (X\Lambda)^T,\; \ldots,\; (X\Lambda^{l-1})^T \,\right]^T$$

has full rank. The smallest l satisfying this condition is called the minimality index of the pair (X, Λ). In particular, if l = 1, then the matrix X has full rank. If (X, Λ) is a minimal invariant pair of regular pencil (1.1), then the spectrum of Λ is a subset of the set of finite eigenvalues of this pencil and the eigenvectors u of the matrix Λ and the eigenvectors x of pencil (1.1) are connected in the following way: x = Xu.

In this paper we consider the problem of computing the minimal invariant pair of index l = 1 corresponding to the finite eigenvalues of pencil (1.1) which are the closest ones to a given point λ0. Applying the change of variables λ → λ + λ0 and introducing new notation, we can reduce this problem to computing the minimal invariant pair of index 1 corresponding to the smallest in magnitude finite eigenvalues of the pencil

$$A(\lambda) = A_0 + \lambda A_1 + \lambda^2 N(\lambda). \qquad (1.2)$$

Here

$$A_0 = T(\lambda_0), \qquad A_1 = T'(\lambda_0), \qquad N(\lambda) = \sum_{i=1}^{d} T_i g_i(\lambda) \qquad (1.3)$$

where $g_i(\lambda) = \lambda^{-2}\left( f_i(\lambda + \lambda_0) - f_i(\lambda_0) - \lambda f_i'(\lambda_0) \right)$. Without loss of generality, we assume that λ0 is not an eigenvalue of pencil (1.1) and, hence, the matrix A0 is nonsingular. Sections 2 and 3 describe the proposed variants of the inverse subspace iteration and the block Newton method, respectively. Section 4 presents the combination of these two methods, where the first method is used to obtain a sufficiently good initial guess and the second one is applied for its fast refinement. Section 5 demonstrates the numerical properties of these methods on the example of a typical quadratic eigenvalue problem appearing in the study of spatial stability of hydrodynamic flows.
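For a quadratic pencil the shift (1.2)–(1.3) can be checked directly: if T(λ) = T1 + λT2 + λ²T3 (so that f1 = 1, f2 = λ, f3 = λ²), then A0 = T(λ0), A1 = T′(λ0) = T2 + 2λ0T3, and N(λ) ≡ T3, since g1 = g2 = 0 and g3 = 1. The following small numerical sketch (with random matrices standing in for T1, T2, T3 and arbitrary values of λ0 and λ) verifies that the shifted pencil reproduces T(λ + λ0):

```python
import numpy as np

# Check of (1.2)-(1.3) for a quadratic pencil T(lam) = T1 + lam*T2 + lam^2*T3
# (f1 = 1, f2 = lam, f3 = lam^2); the matrices are random stand-ins.
rng = np.random.default_rng(0)
n = 4
T1, T2, T3 = (rng.standard_normal((n, n)) for _ in range(3))
lam0 = 0.3

def T(lam):
    return T1 + lam * T2 + lam**2 * T3

A0 = T(lam0)                 # T(lam0)
A1 = T2 + 2 * lam0 * T3      # T'(lam0)
N = T3                       # g1 = g2 = 0 and g3 = 1, so N(lam) = T3

lam = 0.7                    # arbitrary test point
# the shifted pencil reproduces T at lam + lam0
print(np.allclose(A0 + lam * A1 + lam**2 * N, T(lam + lam0)))
```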

Describing the proposed algorithms, we use an additional procedure for orthonormalization of the columns of a given full-rank matrix W ∈ ℂn×p by computing its QR-decomposition [8], i.e., W = QR, where Q ∈ ℂn×p is a unitary rectangular matrix, Q∗Q = I, and R ∈ ℂp×p is an upper triangular one. Hereinafter ∗ denotes conjugate transposition and I is the identity matrix of the corresponding order. We write the result of this procedure as (Q, R) = ort(W), or Q = ort(W) when the matrix R is not needed. In addition, ‖B‖2 denotes below the 2-norm of a vector or matrix B.

We normalize the matrix X of the required minimal invariant pair so that its columns are A0∗A0-orthonormal, i.e., Y∗Y = I, where Y = A0X.

2 Inverse subspace iteration

In this section we describe the proposed variant of the inverse subspace iteration method for computing the minimal invariant pair (X, Λ) of index 1 corresponding to the p smallest in magnitude finite eigenvalues of regular nonlinear matrix pencil (1.2). Up to normalization, this method is a direct generalization of the variant of inverse subspace iteration proposed in [4] for linear matrix pencils.

Algorithm 1

Given ε > 0, X0 ∈ ℂn×p with X0∗A0∗A0X0 = I, and a square nonzero matrix Λ0 of order p.

For k = 1, 2, …

  1. Solve for Xk the system

$$A_0 X_k = -A_1 X_{k-1} - N(X_{k-1}, \Lambda_{k-1})\,\Lambda_{k-1}. \qquad (2.1)$$

  2. Compute (Yk, Gk) = ort(A0Xk). Set Xk := XkGk−1, Ã1 = Yk∗A1Xk, T̃i = Yk∗TiXk, i = 1, …, d.

  3. Solve for Λk the matrix equation

$$I + \tilde{A}_1 \Lambda_k + \tilde{N}(\Lambda_k)\,\Lambda_k^2 = 0 \qquad (2.2)$$

    where

$$\tilde{N}(\lambda) = \sum_{i=1}^{d} \tilde{T}_i g_i(\lambda).$$
  4. Compute the residual

$$R_k = Y_k + A_1 X_k \Lambda_k + N(X_k, \Lambda_k)\,\Lambda_k^2. \qquad (2.3)$$

    Test the convergence: if ‖Rk‖2 ≤ ε, then set Xout = Xk, Λout = Λk, Yout = Yk, Rout = Rk and stop.

At the first step of Algorithm 1 we have to solve the block system of linear equations (2.1). If the order n of the matrix A0 is not too large, then this system may be solved on the basis of the LU-decomposition [8] of the matrix A0, i.e.,

$$A_0 = D\,\Pi_L^{-1}\, L\, U\, \Pi_U^{-1} \qquad (2.4)$$

where ΠL and ΠU are the permutation matrices, D is a left diagonal scaling matrix, and L and U are the lower and upper triangular matrices, respectively. If A0 is of large order, then this system can be solved, for example, by GMRES [11] using the right preconditioning based on the incomplete LU-decomposition of the matrix A0 and tuning to accelerate the convergence as it was proposed in [4].
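The factor-once, solve-many pattern behind this remark can be sketched as follows. This is an illustration only: SciPy's `lu_factor`/`lu_solve` stand in for the scaled decomposition (2.4), and the matrix and right-hand side are random stand-ins for A0 and the right-hand side of (2.1):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Factor A0 once, then reuse the factorization for the solve at every
# iteration of Algorithm 1. A0 and the right-hand side are random stand-ins.
rng = np.random.default_rng(0)
n, p = 8, 2
A0 = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally shifted, nonsingular
lu, piv = lu_factor(A0)                  # O(n^3), done once

B = rng.standard_normal((n, p))          # stand-in for the RHS of (2.1)
X = lu_solve((lu, piv), B)               # each solve is only O(n^2 p)
```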

At the third step of Algorithm 1 we have to solve for Λk matrix equation (2.2). To do that, we use the following iterative algorithm.

Algorithm 2

Given ε̃ > 0 and a nonzero matrix Λ̃0 of order p.

For j = 0, 1, …

  1. Compute the residual

$$\tilde{R}_j = I + \tilde{A}_1 \tilde{\Lambda}_j + \tilde{N}(\tilde{\Lambda}_j)\,\tilde{\Lambda}_j^2.$$

    Test the convergence: if ‖R̃j‖2 ≤ ε̃, then set Λk = Λ̃j and stop.

  2. Compute

$$\tilde{\Lambda}_{j+1} = \tilde{\Lambda}_j - \tilde{A}_1^{-1} \tilde{R}_j.$$

If we have no a priori information concerning the matrix X0, which has to be specified in the initialization of Algorithm 1, then X0 can be taken as a random rectangular matrix with A0∗A0-orthonormal columns.

It should be noted that in the case N(λ) ≡ 0 (i.e., when considered matrix pencil (1.2) is linear) Algorithm 1 coincides with the variant of inverse subspace iteration proposed in [4] up to normalization of the matrices Xk. In this case the matrix Λk is computed exactly in one step of Algorithm 2.
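The steps of Algorithms 1 and 2 can be assembled into the following sketch for the quadratic case N(λ) ≡ A2, which is the setting of Section 5. It is an illustration, not the authors' implementation: dense NumPy solves replace the LU machinery, Λ0 = 0 is taken as in Section 4, Algorithm 2 is started from the solution of the linear part of (2.2), and the diagonal test pencil at the end is an arbitrary example with known eigenvalues:

```python
import numpy as np

def block_inverse_iteration(A0, A1, A2, p, tol=1e-9, max_iter=200, seed=0):
    """Algorithms 1 and 2 for a quadratic pencil A0 + lam*A1 + lam^2*A2,
    i.e., N(lam) = A2 is constant (dense-matrix sketch)."""
    n = A0.shape[0]
    rng = np.random.default_rng(seed)
    # random X0 with A0*A0-orthonormal columns
    X = np.linalg.solve(A0, np.linalg.qr(A0 @ rng.standard_normal((n, p)))[0])
    Lam = np.zeros((p, p))                       # Lam0 = 0, as in Section 4
    for _ in range(max_iter):
        # Step 1: solve the block system (2.1)
        X = np.linalg.solve(A0, -(A1 @ X + A2 @ X @ Lam))
        # Step 2: renormalize so that Y = A0 X has orthonormal columns
        Y, G = np.linalg.qr(A0 @ X)
        X = X @ np.linalg.inv(G)
        A1t = Y.conj().T @ A1 @ X
        A2t = Y.conj().T @ A2 @ X
        # Step 3: solve the projected equation (2.2) by Algorithm 2,
        # started from the solution of its linear part
        Lam = -np.linalg.solve(A1t, np.eye(p))
        for _ in range(100):
            Rt = np.eye(p) + A1t @ Lam + A2t @ Lam @ Lam
            if np.linalg.norm(Rt, 2) <= 1e-13:
                break
            Lam = Lam - np.linalg.solve(A1t, Rt)
        # Step 4: residual (2.3) and convergence test
        R = Y + A1 @ X @ Lam + A2 @ X @ Lam @ Lam
        if np.linalg.norm(R, 2) <= tol:
            break
    return X, Lam, Y, R

# Diagonal test pencil with known eigenvalues: row i contributes the roots
# a_i and b_i of (lam - a_i)(lam - b_i) = a_i b_i - (a_i + b_i) lam + lam^2.
a = np.array([0.1, 0.2, 1.0, 1.5])
b = np.array([5.0, 6.0, 7.0, 8.0])
X, Lam, Y, R = block_inverse_iteration(np.diag(a * b), np.diag(-(a + b)),
                                       np.eye(4), p=2)
print(np.sort(np.abs(np.linalg.eigvals(Lam))))   # the two smallest-in-magnitude roots
```

The eigenvalues of the computed Λ approximate the two smallest in magnitude roots of the diagonal quadratics, here 0.1 and 0.2.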

3 Block Newton’s method

A block Newton method was also proposed in [4] for computing the invariant pair corresponding to the smallest in magnitude finite eigenvalues of a regular linear matrix pencil. In this section we generalize this method to nonlinear matrix pencils for computing the minimal invariant pair of index 1 corresponding to minimal in magnitude finite eigenvalues of pencil (1.2).

Let (Xk, Λk) be an approximate minimal invariant pair, let Xk ∈ ℂn×p have A0∗A0-orthonormal columns, and let the matrix Xk∗A0∗A1Xk be nonsingular. Concerning the matrix Λk, we assume that it is a solution to equation (2.2).

Note that the left-hand side of equation (2.2) is the projection of residual (2.3) associated with the approximate minimal invariant pair (Xk, Λk) onto the span of columns of the matrix Yk.

Under these assumptions, the kth iteration of the described method consists in the following. First we compute residual (2.3). Then we solve for Φk the generalized Sylvester equation

$$A_0 \Phi_k + (I - Y_k Y_k^*)\, A_1 \Phi_k \Lambda_k = R_k \qquad (3.1)$$

and apply the Newton step

$$X_{k+1} = X_k - \Phi_k.$$

After that we normalize the obtained matrix Xk+1 and calculate the corresponding matrices Yk+1, Λk+1, and Rk+1 in a similar way to Steps 2–4 of Algorithm 1.

Since Yk∗Rk = 0, equation (3.1) implies Yk∗A0Φk = 0. Therefore, equality (3.1) can be rewritten in the form

$$(I - Y_k Y_k^*)\left( A_0 \Phi_k + A_1 \Phi_k \Lambda_k \right) = (I - Y_k Y_k^*)\, R_k, \qquad Z_k^* \Phi_k = 0 \qquad (3.2)$$

where Zk = ort(A0∗Yk).

In the case N(λ) ≡ 0 the algorithm described above coincides with the block Newton method proposed in [4] for linear pencils. Moreover, the generalized Sylvester equation has the same form (3.2) in the cases of linear or nonlinear pencil (1.2). This allows us to use the following algorithm proposed in [4] for solving equation (3.2).

Let us compute the Schur decomposition [8]:

$$\Lambda_k = Q_k C_k Q_k^* \qquad (3.3)$$

where Qk is a unitary matrix and Ck is an upper triangular one, and make the change of variables Φk := ΦkQk and Rk := RkQk. By Φkj and Rkj we denote the jth columns of the matrices Φk and Rk, respectively, and by $c_{ij}^{(k)}$ we denote the ijth entry of the matrix Ck. In this case system (3.2) can be rewritten in the form

$$(I - Y_k Y_k^*)\left( A_0 + c_{jj}^{(k)} A_1 \right) \Phi_{kj} = \Omega_{kj}, \qquad Z_k^* \Phi_{kj} = 0, \qquad j = 1, 2, \ldots, p \qquad (3.4)$$

    where

$$\Omega_{k1} = (I - Y_k Y_k^*)\, R_{k1}, \qquad \Omega_{kj} = (I - Y_k Y_k^*)\Bigl( R_{kj} - \sum_{i=1}^{j-1} c_{ij}^{(k)} A_1 \Phi_{ki} \Bigr), \quad j > 1.$$

We solve each system in (3.4) with the use of the right preconditioning, i.e.,

$$H_k \Gamma_{kj} = \Omega_{kj}, \qquad \Phi_{kj} = L_k \Gamma_{kj} \qquad (3.5)$$

where

$$H_k = (I - Y_k Y_k^*)\left( A_0 + c_{jj}^{(k)} A_1 \right) L_k \qquad (3.6)$$

and Lk is the preconditioning matrix. Applying GMRES to preconditioned system (3.5), we obtain the approximate solution Γ^kj satisfying the inequality

$$\| \Omega_{kj} - H_k \hat{\Gamma}_{kj} \|_2 \leqslant \delta \| R_k \|_2 \qquad (3.7)$$

where δ is a given tolerance. The second equalities in (3.4) and (3.5) show that the matrix Lk must satisfy the equality Lk = (I − ZkZk∗)Lk. On the other hand, at each step of GMRES the approximate solution belongs to the Krylov subspace generated by the matrix Hk, which satisfies the equality Hk = (I − YkYk∗)Hk, and by the initial residual r0 = Ωkj − HkΓkj(0), which satisfies the equality r0 = (I − YkYk∗)r0, where Γkj(0) is the initial guess. This means that the approximate solution belongs to the subspace (I − YkYk∗)ℂn and, therefore, the result does not change (in exact arithmetic) if we use the matrix Lk(I − YkYk∗) instead of Lk. Thus, the most general form of the preconditioning matrix Lk is

$$L_k = (I - Z_k Z_k^*)\, \tilde{L}_k\, (I - Y_k Y_k^*) \qquad (3.8)$$

where L̃k is a square matrix of order n, and the problem of choosing the preconditioning matrix Lk is reduced to the choice of the matrix L̃k. In the numerical experiments presented in Section 5 we take L̃k = A0−1 and compute the products with this matrix on the basis of exact LU-decomposition (2.4) of the matrix A0. If the matrix A0 is extremely large and sparse, then one may use the incomplete LU-decomposition instead of the complete one.
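The structure of the preconditioned solve (3.5)–(3.8) with L̃k = A0⁻¹ can be sketched on a small dense example. All data below are random stand-ins for one column system of (3.4), and a least-squares solve stands in for GMRES (the matrix Hk is singular, but the system is consistent by construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 12, 3

# Random data standing in for one system of (3.4)-(3.6): Y has orthonormal
# columns, Z = ort(A0* Y), c is a diagonal entry of the Schur factor C_k.
A0 = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned stand-in
A1 = rng.standard_normal((n, n))
Y = np.linalg.qr(rng.standard_normal((n, p)))[0]
Z = np.linalg.qr(A0.conj().T @ Y)[0]
c = 0.7

P_Y = np.eye(n) - Y @ Y.conj().T                   # I - Y Y*
P_Z = np.eye(n) - Z @ Z.conj().T                   # I - Z Z*

# Preconditioner (3.8) with Ltilde = A0^{-1}
L = P_Z @ np.linalg.solve(A0, P_Y)
H = P_Y @ (A0 + c * A1) @ L                        # (3.6)

# Right-hand side in the range of I - Y Y*, as Omega_kj is
Omega = P_Y @ rng.standard_normal(n)

# In the paper H Gamma = Omega is solved by GMRES; least squares is used
# here as a stand-in, since H is singular but the system is consistent.
Gamma = np.linalg.lstsq(H, Omega, rcond=None)[0]
Phi = L @ Gamma                                    # (3.5)

# Phi solves the projected system and satisfies the constraint Z* Phi = 0
assert np.linalg.norm(P_Y @ (A0 + c * A1) @ Phi - Omega) < 1e-8
assert np.linalg.norm(Z.conj().T @ Phi) < 1e-10
```

By construction Lk = (I − ZZ∗)A0⁻¹(I − YY∗), so the constraint Z∗Φ = 0 holds automatically for any Γ.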

4 Resulting algorithm

Combining the algorithms described in Sections 2 and 3, we get a method for computing the minimal invariant pair of index 1 corresponding to the smallest in magnitude finite eigenvalues of pencil (1.2).

Starting from an arbitrary matrix X0 having A0∗A0-orthonormal columns and from a zero matrix Λ0, we apply several steps of Algorithm 1 and set X0 = Xout, Λ0 = Λout, Y0 = Yout, R0 = Rout. After that we apply the block Newton method described in Section 3. It can be formally written in this case as the following algorithm.

Algorithm 3

Given ε > 0, δ > 0, and the matrices X0, Λ0, Y0, R0, computed by Algorithm 1.

For k = 0, 1, …

  1. Compute the Schur decomposition (3.3).

  2. Set Xk := XkQk, Yk := YkQk, Rk := RkQk.

  3. Compute Zk = ort(A0∗Yk).

  4. Solve for Φk system (3.4) by the method described in Section 3.

  5. Compute Xk+1 = Xk − Φk.

  6. Given Xk+1, compute the matrices Yk+1, Λk+1 and the residual Rk+1 in a similar way to Steps 2–4 of Algorithm 1.

  7. Test the convergence: if ‖Rk+12ε, then set Xout = Xk+1, Λout = Λk+1 and stop.

To minimize the number of iterations of Algorithm 3, the Schur decomposition computed at Step 1 of Algorithm 3 needs to have the diagonal entries of Ck ordered in a non-increasing order of magnitude (see [3], p. 597).

If generalized Sylvester equation (3.4) is solved according to (3.5), (3.6), (3.7), and (3.8) with the tolerance δ in (3.7) small enough, then Algorithm 3 demonstrates quadratic convergence, provided that the approximate invariant pair obtained by a few iterations of Algorithm 1 is sufficiently close to the exact one. As δ increases, the solution of the Sylvester equation becomes less accurate, and Algorithm 3 exhibits only linear convergence.
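One iteration of Algorithm 3 can be sketched for a small quadratic test pencil. This is an illustrative dense-matrix version, not the authors' implementation: the column systems (3.4) are solved by stacking the constraint Zk∗Φkj = 0 under the projected equation and using least squares instead of preconditioned GMRES, and the diagonal test pencil and the perturbation size are arbitrary choices:

```python
import numpy as np
import scipy.linalg

# Diagonal quadratic test pencil A(lam) = A0 + lam*A1 + lam^2*A2
# (row i has roots a_i, b_i); the exact minimal invariant pair for p = 2
# spans the first two coordinate vectors.
a = np.array([0.05, 0.1, 2.0, 3.0])
b = np.array([5.0, 6.0, 7.0, 8.0])
A0, A1, A2 = np.diag(a * b), np.diag(-(a + b)), np.eye(4)
n, p = 4, 2

def normalize_and_project(X):
    """Steps 2-4 of Algorithm 1: normalize X, solve (2.2), form residual (2.3)."""
    Y, G = np.linalg.qr(A0 @ X)
    X = X @ np.linalg.inv(G)
    A1t, A2t = Y.conj().T @ A1 @ X, Y.conj().T @ A2 @ X
    Lam = -np.linalg.solve(A1t, np.eye(p))
    for _ in range(200):
        Rt = np.eye(p) + A1t @ Lam + A2t @ Lam @ Lam
        if np.linalg.norm(Rt, 2) <= 1e-14:
            break
        Lam = Lam - np.linalg.solve(A1t, Rt)
    return X, Y, Lam, Y + A1 @ X @ Lam + A2 @ X @ Lam @ Lam

def newton_step(X, Y, Lam, R):
    """Steps 1-5 of Algorithm 3: solve the column systems (3.4), apply X - Phi.
    The rotation by Q is not undone, since the subsequent renormalization
    depends only on the column space."""
    C, Q = scipy.linalg.schur(Lam, output='complex')     # (3.3)
    Xq, Yq, Rq = X @ Q, Y @ Q, R.astype(complex) @ Q
    Z = np.linalg.qr(A0.conj().T @ Yq)[0]
    P = np.eye(n) - Yq @ Yq.conj().T                     # I - Y Y*
    Phi = np.zeros((n, p), dtype=complex)
    for j in range(p):
        rhs = Rq[:, j] - sum(C[i, j] * (A1 @ Phi[:, i]) for i in range(j))
        # projected equation with the constraint Z* Phi_j = 0 stacked below
        M = np.vstack([P @ (A0 + C[j, j] * A1), Z.conj().T])
        Phi[:, j] = np.linalg.lstsq(M, np.concatenate([P @ rhs, np.zeros(p)]),
                                    rcond=None)[0]
    return Xq - Phi

# start from a small perturbation of the exact invariant subspace
rng = np.random.default_rng(0)
X0, Y0, L0, R0 = normalize_and_project(np.eye(n)[:, :p]
                                       + 1e-4 * rng.standard_normal((n, p)))
X1, Y1, L1, R1 = normalize_and_project(newton_step(X0, Y0, L0, R0))
print(np.linalg.norm(R1, 2) / np.linalg.norm(R0, 2))   # sharp drop in one step
```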

5 Numerical experiments

To illustrate the use of Algorithms 1 and 3, we took a typical quadratic eigenvalue problem appearing in the analysis of spatial stability of hydrodynamic flows. Consider a steady flow of viscous incompressible fluid in the three-dimensional infinite duct {−∞ < x < ∞} × Σ of a constant square cross-section

$$\Sigma = \{ (y, z):\; -1 < y < 1,\; -1 < z < 1 \}$$

with no-slip boundary conditions on the duct walls and the constant pressure gradient (−τ, 0, 0), where τ is a positive constant. This flow is referred to as the Poiseuille flow [6], and below it is taken as the main flow. The normalized velocity vector of the main flow has the form (U(y, z), 0, 0), where the component U(y, z) of the velocity in the streamwise direction x is nonnegative and reaches its maximum value of 1 at the midpoint of the cross-section. The normalized velocity profile U(y, z) can be obtained by solving the Poisson equation

$$\frac{\partial^2 \tilde{U}}{\partial y^2} + \frac{\partial^2 \tilde{U}}{\partial z^2} = -1$$

in the domain Σ with Dirichlet boundary conditions and taking U(y, z) = Ũ(y, z)/Ũ(0, 0).
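A minimal finite-difference sketch of this Poisson solve (five-point Laplacian on a uniform grid with a dense solve for brevity; the grid size is an arbitrary choice):

```python
import numpy as np

# Solve Uyy + Uzz = -1 on (-1,1)^2 with zero Dirichlet boundary conditions,
# then scale so that U(0,0) = 1, as in the normalization of the profile.
m = 31                                   # odd, so (0,0) is a grid node
h = 2.0 / (m + 1)
# 1-D second-difference matrix on the inner nodes
D = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / h**2
I = np.eye(m)
Lap = np.kron(D, I) + np.kron(I, D)      # 2-D Laplacian, row-major ordering
Ut = np.linalg.solve(Lap, -np.ones(m * m)).reshape(m, m)
U = Ut / Ut[m // 2, m // 2]              # normalize: U(0,0) = 1
```

The discrete maximum principle guarantees 0 ≤ U ≤ 1, with the maximum at the center of the cross-section.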

The analysis of spatial stability of the Poiseuille flow consists in the study of its stability to infinitesimal disturbances of the form

$$\begin{bmatrix} \mathbf{v}(x, y, z, t) \\ p(x, y, z, t) \end{bmatrix} = \begin{bmatrix} \mathbf{v}(y, z) \\ p(y, z) \end{bmatrix} e^{\lambda x - i\omega t} \qquad (5.1)$$

where v and p do not depend on x and t, and λ, ω are some numbers. Substituting (5.1) into linearized disturbance equations (see, e.g., [6]), we obtain the following equations:

$$\begin{aligned}
\lambda U u - L u + U_y v + U_z w - \frac{\lambda^2}{Re}\, u + \lambda p &= 0\\
\lambda U v - L v - \frac{\lambda^2}{Re}\, v + p_y &= 0\\
\lambda U w - L w - \frac{\lambda^2}{Re}\, w + p_z &= 0\\
\lambda u + v_y + w_z &= 0
\end{aligned} \qquad (5.2)$$

where u, v, and w are the components of the velocity vector v in the directions of x, y, and z, respectively, the subscripts y and z denote the corresponding partial derivatives,

$$L = i\omega + \frac{1}{Re}\left( \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2} \right)$$

and Re is the Reynolds number.

Studying the spatial stability, one usually fixes some real ω and considers system (5.2) as an eigenvalue problem for the eigenvalue λ and the eigenvector (u, v, w, p)T. For definiteness, let ω > 0. In this case the finite eigenvalues of problem (5.2) of physical interest satisfy the inequalities Imag λ > 0 and Real λ < 0. Physically, these conditions mean that disturbances do not reflect from the boundary x = ∞ and there are no sources there. Usually one needs to obtain several such eigenvalues and eigenvectors. We consider problem (5.2) with ω = 0.1 and Re = 3000. Note that the considered Poiseuille flow is linearly stable for any finite Reynolds number [1, 2, 12, 13].

In the theory of hydrodynamic stability the eigenvalue λ is usually represented as iα, which is more convenient for physical interpretation of disturbances of form (5.1). However, in this paper we do not use this notation because our aim is only to demonstrate the use of the proposed algorithms.

Since the profile U(y, z) of the Poiseuille flow velocity is an even function of y and z, equations (5.2) admit solutions possessing four different symmetries [1, 2, 12, 13]. Solutions of the form

(u+,v++,w,p+)

where, for example, u−+ is an odd function with respect to y and an even one with respect to z, are the least stable ones. Below we consider solutions with the above symmetry.

Approximating equations (5.2) in y and z with second-order finite differences on staggered grids of type C with m inner nodes in each direction, we obtain a quadratic matrix pencil of the form

$$T(\lambda) = T_1 + \lambda T_2 + \lambda^2 T_3. \qquad (5.3)$$

Taking the symmetry into account, we can reduce the order of pencil (5.3) by a factor of approximately four. Thus, the order n of the considered pencil approximately equals m2.

Using the standard approach, we can transform quadratic pencil (5.3) to the linear pencil

$$\begin{bmatrix} T_1 & 0 \\ 0 & I \end{bmatrix} + \lambda \begin{bmatrix} T_2 & T_3 \\ -I & 0 \end{bmatrix} \qquad (5.4)$$

whose matrices are of twice the order. The eigenvalues of pencils (5.3) and (5.4) coincide, and after computing the eigenvectors of pencil (5.4) we can construct the eigenvectors of pencil (5.3). Therefore, all eigenvalues and eigenvectors of pencil (5.3) can be computed by the QZ-algorithm [8] applied to (5.4). However, this approach leads to a large computational cost (we have to perform about 1600n3 double precision arithmetic operations and store about 32n2 double precision numbers) and, hence, it is applicable only to approximations on a sufficiently coarse grid.
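The coincidence of the spectra of (5.3) and (5.4) can be checked on a toy example; the matrices below are random stand-ins for T1, T2, T3:

```python
import numpy as np
import scipy.linalg

# Toy check that the companion-type linearization (5.4) carries the spectrum
# of the quadratic pencil T1 + lam*T2 + lam^2*T3; the matrices are random.
rng = np.random.default_rng(0)
n = 3
T1, T2, T3 = (rng.standard_normal((n, n)) for _ in range(3))

Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[T1, Z], [Z, I]])
B = np.block([[T2, T3], [-I, Z]])

# (A + lam*B) v = 0  <=>  A v = lam * (-B) v
lam = scipy.linalg.eigvals(A, -B)
lam = lam[np.isfinite(lam)]

# each eigenvalue must make the quadratic pencil (numerically) singular
rel = [np.linalg.svd(T1 + mu * T2 + mu**2 * T3, compute_uv=False) for mu in lam]
rel = [s[-1] / s[0] for s in rel]
```

Since T3 is generically nonsingular here, all 2n generalized eigenvalues are finite, and the smallest relative singular value of T(λ) vanishes at each of them.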

In Figure 1, left, the symbols ◻ denote all finite eigenvalues of pencil (5.3) computed by the QZ-algorithm for m = 50. These computations took about 340 seconds. On the right, the same symbols indicate the p = 5 leading eigenvalues from those of physical interest. In addition, the right-hand side of Figure 1 shows the same five eigenvalues computed on finer grids with the use of the combination of Algorithms 1 and 3. To do that, we applied the change of variables λ → λ + λ0 with λ0 = −0.02 + 0.13i and reduced pencil (5.3) to a pencil of the form

$$A(\lambda) = A_0 + \lambda A_1 + \lambda^2 A_2 \qquad (5.5)$$

with the matrices A0 = T1 + λ0T2 + λ02T3, A1 = T2 + 2λ0T3, and A2 = T3, and computed its p = 5 smallest in magnitude finite eigenvalues with the use of Algorithms 1 and 3. After that, using the inverse change of variables λ → λ − λ0, we obtained the required eigenvalues of pencil (5.3).

Fig. 1 Left, all finite eigenvalues of pencil (5.3) computed by the QZ-algorithm with m = 50 (◻); right, five finite eigenvalues closest to the point λ0 = −0.02 + 0.13i (∗) computed by a combination of Algorithms 1 and 3 for m = 100 (+), 200 (×), 400 (∘), and 800 (⋄).

In accordance with Section 4, Algorithm 1 was used for computing a good initial guess for Algorithm 3. First we applied 10 iterations of Algorithm 1 and computed the initial guess, then we applied 5 iterations of Algorithm 3. At each iteration of Algorithm 1, block systems (2.1) were solved based on complete LU-decomposition (2.4) of the matrix A0. At each iteration of Algorithm 3, generalized Sylvester equation (3.2) was solved approximately by the algorithm described in Section 3 using GMRES with the right preconditioning, with the same LU-decomposition of the matrix A0 used for L̃k. The maximal dimension of the Krylov bases in all experiments was set to 50, and the parameter δ in (3.7) was equal to 10−5. In the stopping criteria we used ε = a·10−14, where a denotes the maximal absolute value of the entries of A0.

Table 1 illustrates the computational cost of the combination of Algorithms 1 and 3 in computing the approximate invariant pair (X, Λ) corresponding to p = 5 finite eigenvalues (see Fig. 1, right) of pencil (5.5). Here ‖R‖2 is the 2-norm of the final residual R = A0X + A1XΛ + A2XΛ2, Ntotal is the total number of multiplications of the matrix A0−1 by a column (based on the LU-decomposition) to which the solution of all block systems is reduced (these computations formed the main part of the total computational costs), and Ttotal is the computational time in seconds. Algorithm 2 used at Step 3 of Algorithm 1 and Step 6 of Algorithm 3 always converged in 3–4 iterations for the threshold norm of the residual ε̃ = ε.

For m = 800 the solid line in Fig. 2 presents the dependence of the 2-norm of the residual Rk = A0Xk + A1XkΛk + A2XkΛk2 for the approximate invariant pair (Xk, Λk) computed at the kth iteration of the combination of Algorithms 1 and 3 on the number k of that iteration. In this case we used continuous numbering of iterations. The obtained results show a high efficiency of the proposed method, including only a slight dependence of the number of systems that need to be solved on the order n of the matrices of the pencil. It should be noted that for m = 800 the norm of the nonlinear term of the final residual was approximately 4.1×10−5, i.e., the nonlinear term of pencil (5.5) played a significant role in the considered partial eigenvalue problem.

Fig. 2 Convergence of the combination of Algorithms 1 and 3 applied to quadratic pencil (5.5) for m = 800.

Tab. 1

Computational cost of the combination of Algorithms 1 and 3.

m           100          200          400          800
n           10^4         4×10^4       1.6×10^5     6.4×10^5
‖R‖2        1.9×10^−12   4.7×10^−12   1.6×10^−11   6.3×10^−11
Ntotal      217          223          221          226
Ttotal, s   1            6            29           142

6 Conclusion

The variant of inverse subspace iteration and block Newton’s method proposed previously in [4] for computing the invariant pairs of regular linear matrix pencils are directly generalized in this paper to the case of regular nonlinear matrix pencils. The computation efficiency of these methods is illustrated on the example of a typical quadratic eigenvalue problem appearing in the spatial hydrodynamic stability analysis.

Complete LU-decomposition (2.4) of the matrix A0 was used in the solution of block systems in Algorithm 1 and in construction of the preconditioner for GMRES in solution of generalized Sylvester equations in Algorithm 3. However, if the order of the matrices of the pencil is sufficiently large, then one should use the incomplete LU-decomposition. In this case, block systems in Algorithm 1 can be solved by GMRES with the use of the right preconditioning and tuning as this was proposed in [4].

The algorithms proposed in this paper are designed to compute the minimal invariant pairs of index l = 1. In order to compute minimal invariant pairs of index l > 1, these algorithms should be used together with the deflation procedure proposed in [10].

Funding: The work was supported by the Russian Foundation for Basic Research (projects No. 16–01–00572, 16–31–60092) (development and justification of the proposed method) and by the Russian Science Foundation (project No. 17–71–20149) (numerical experiments, analysis of the obtained results).

References

[1] K. V. Demyanko and Yu. M. Nechepurenko, Dependence of the linear stability of Poiseuille flows in a rectangular duct on the cross-sectional aspect ratio. Doklady Phys. 56 (2011), 531–533. DOI: 10.1134/S1028335811100077.

[2] K. V. Demyanko and Yu. M. Nechepurenko, Linear stability analysis of Poiseuille flow in a rectangular duct. Russ. J. Numer. Anal. Math. Modelling 28 (2013), 125–148. DOI: 10.1515/rnam-2013-0008.

[3] K. V. Demyanko, Yu. M. Nechepurenko, and M. Sadkane, Inverse subspace bi-iteration and bi-Newton methods for computing spectral projectors. Comput. Math. Appl. 69 (2015), No. 7, 592–600. DOI: 10.1016/j.camwa.2015.02.002.

[4] K. V. Demyanko, Yu. M. Nechepurenko, and M. Sadkane, A Newton-like method for computing deflating subspaces. J. Numer. Math. 23 (2015), 289–301. DOI: 10.1515/jnma-2015-0019.

[5] K. V. Demyanko, Yu. M. Nechepurenko, and M. Sadkane, A Newton-type method for nonlinear eigenproblems. Russ. J. Numer. Anal. Math. Modelling 32 (2017), No. 4, 1–8. DOI: 10.1515/rnam-2017-0022.

[6] P. G. Drazin, Introduction to Hydrodynamic Stability. Cambridge University Press, Cambridge, 2002. DOI: 10.1017/CBO9780511809064.

[7] S. K. Godunov and Yu. M. Nechepurenko, Bounds for the convergence rate of Newton's method for calculating invariant subspaces. Comp. Maths. Math. Phys. 42 (2002), No. 6, 739–746.

[8] G. H. Golub and Ch. Van Loan, Matrix Computations. The Johns Hopkins University Press, Baltimore–London, 1996.

[9] G. El Khoury, Yu. M. Nechepurenko, and M. Sadkane, Acceleration of inverse subspace iteration with Newton's method. J. Comput. Appl. Math. 259 (2014), 205–215. DOI: 10.1016/j.cam.2013.06.046.

[10] D. Kressner, A block Newton method for nonlinear eigenvalue problems. Numerische Mathematik 114 (2009), 355–372. DOI: 10.1007/s00211-009-0259-x.

[11] Y. Saad, Iterative Methods for Sparse Linear Systems. SIAM, Philadelphia, PA, 2003. DOI: 10.1137/1.9780898718003.

[12] T. Tatsumi and T. Yoshimura, Stability of the laminar flow in a rectangular duct. J. Fluid Mech. 212 (1990), 437–449. DOI: 10.1017/S002211209000204X.

[13] V. Theofilis, P. W. Duck, and J. Owen, Viscous linear stability analysis of rectangular duct and cavity flow. J. Fluid Mech. 505 (2004), 249–286. DOI: 10.1017/S002211200400850X.

Received: 2017-11-9
Accepted: 2017-11-26
Published Online: 2018-2-21
Published in Print: 2018-2-23

© 2018 Walter de Gruyter GmbH, Berlin/Boston
