Article, Open Access

On the spectrum of tridiagonal matrices with two-periodic main diagonal

  • Alexander Dyachenko and Mikhail Tyaglov
Published/Copyright: July 4, 2024

Abstract

We find the spectrum and eigenvectors of an arbitrary irreducible complex tridiagonal matrix with two-periodic main diagonal. These are expressed in terms of the spectrum and eigenvectors of the matrix with the same sub- and superdiagonals and zero main diagonal. Our result generalises some recent results where the latter matrix stemmed from certain discrete orthogonal polynomials, including specific cases of the classical Krawtchouk and Hahn polynomials.

MSC 2010: 15A18; 15B05; 15A15

1 Introduction

Tridiagonal matrices play an important role in many areas of mathematics and physics. They appear as matrices of three-term recurrence relations of orthogonal polynomials, as discretisations of ordinary differential operators, as simple test matrices when the spectrum is known, etc.

In recent publications [7,8,11,12], the authors considered several tridiagonal matrices whose entries and eigenvalues are integers, and explicitly found the spectra of two-periodic diagonal perturbations of those matrices. The articles [7,11,12] dealt with the matrix of three-term recurrence relations of shifted symmetric Krawtchouk polynomials, widely known as the Sylvester-Kac (or Clement) matrix (see, e.g., [15]). da Fonseca and Kılıç [8] took (modified) submatrices of the Sylvester-Kac matrix, which in fact stem from symmetric Hahn polynomials and were earlier considered by Cayley [3] (see [5] for details). Finally, the work [1] dealt with a two-periodic diagonal perturbation of a matrix whose spectrum (in general, non-integer) was found by Oste and Van der Jeugt [14]. These results on two-periodic diagonal perturbations were obtained by the so-called right-eigenvector method, which led the authors to sophisticated calculations.

At the same time, a careful reader may observe that all the matrices whose perturbations were considered in [1,7,8,11,12] have zero main diagonals. Moreover, the eigenvalues of the perturbed matrices actually depend on the spectra of the unperturbed ones. Bearing these observations in mind, in the present note, we study two-periodic diagonal perturbations of arbitrary irreducible[1] complex tridiagonal matrices. We express the spectrum and eigenvectors of such perturbed matrices via the spectrum and eigenvectors of the original unperturbed matrices, which generalises the aforementioned results of the works [1,7,8,11,12].

This note is not a survey of the field, nor does it present a new approach to the subject. It serves to derive, in an elementary and economical way, a general fact on a certain tridiagonal matrix perturbation embracing some recent results for particular tridiagonal matrices. Our approach is minimalistic: it provides a shorter and simpler proof, and it also allows us to deal with the eigenvectors of perturbed matrices.

This article is organised as follows. Section 2 is devoted to a brief review of spectral properties of tridiagonal matrices with zero main diagonal. In Section 3, we express the eigenvalues, eigenvectors, and first generalised eigenvectors of the perturbed matrix through those of the corresponding unperturbed matrix. In Section 4, we extend the results of Section 3 to tridiagonal matrices with two-periodic main diagonal.

Note that there is another approach, based on orthogonal polynomials, to the problem considered in the present work that leads to more detailed results for (generalised) eigenvectors. Moreover, there is a direct relation of our Theorem 3.1 to the striking result of [4] and to what is now called the quadratic decomposition of orthogonal polynomials [13]. Although this approach proved to be fruitful and led to many publications, we intentionally refrain from using it here, as it is the subject of our further work [6].

2 Tridiagonal matrices with zero main diagonal

Consider an $n\times n$ complex irreducible tridiagonal matrix whose main diagonal contains only zero entries:

(2.1) $J_n=\begin{pmatrix} 0 & c_1 & 0 & \cdots & 0 & 0\\ a_1 & 0 & c_2 & \cdots & 0 & 0\\ 0 & a_2 & 0 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 0 & c_{n-1}\\ 0 & 0 & 0 & \cdots & a_{n-1} & 0 \end{pmatrix},\qquad a_k,c_k\in\mathbb{C}\setminus\{0\}.$

Since the main diagonal of $J_n$ is zero, the spectrum of $J_n$ is symmetric w.r.t. zero.

Theorem 2.1

The spectrum of the matrix $J_n$ has the form

(2.2) $\sigma(J_{2l})=\{\pm\lambda_1,\pm\lambda_2,\ldots,\pm\lambda_l\}\quad\text{and}\quad\sigma(J_{2l+1})=\{0,\pm\lambda_1,\pm\lambda_2,\ldots,\pm\lambda_l\}.$

Here, the numbers $\lambda_i$ are not necessarily distinct: each eigenvalue of $J_n$ appears as many times as its algebraic multiplicity. For even $n$, all $\lambda_i$ are non-zero.

Proof

Let $\chi_k(z)$, $k=1,\ldots,n$, be the characteristic polynomial of the $k$th leading principal submatrix of $J_n$. Then, the following three-term recurrence relations hold

(2.3) $\chi_{k+1}(z)=z\chi_k(z)-a_kc_k\chi_{k-1}(z),\qquad k=0,1,\ldots,n-1,$

with $\chi_{-1}(z)\equiv0$, $\chi_0(z)\equiv1$. From (2.3), it follows that the polynomials $\chi_k(z)$ do not depend on $a_k$ and $c_k$ separately, but only on the products $a_kc_k$. Therefore, the matrices $J_n$ and $-J_n$ have the same eigenvalues, and hence, the spectrum of $J_n$ is symmetric w.r.t. $0$. In particular, if $n$ is odd, the matrix $J_n$ is singular.

However,

$\chi_{2l}(0)=\det(J_{2l})=(-1)^l\prod_{k=1}^{l}a_{2k-1}c_{2k-1}\neq0,$

so for even $n$, the matrix $J_n$ is non-singular.□
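The recurrence (2.3) is easy to run in practice. The following sketch (pure Python, not from the paper) computes the characteristic polynomial from the products $a_kc_k$ alone and uses the $4\times4$ Sylvester-Kac matrix, whose spectrum $\{\pm1,\pm3\}$ is known, as a hypothetical test case; the vanishing of every other coefficient reflects the symmetry of the spectrum.

```python
def charpoly(products):
    """Characteristic polynomial of a zero-diagonal irreducible tridiagonal
    matrix, computed from the products p_k = a_k * c_k via the recurrence
    (2.3): chi_{k+1}(z) = z*chi_k(z) - p_k*chi_{k-1}(z), chi_0 = 1, chi_1 = z.
    Coefficients are returned lowest degree first."""
    prev, cur = [1], [0, 1]              # chi_0 and chi_1
    for p in products:
        nxt = [0] + cur                  # multiply chi_k by z
        for i, coeff in enumerate(prev):
            nxt[i] -= p * coeff          # subtract p_k * chi_{k-1}
        prev, cur = cur, nxt
    return cur

# 4x4 Sylvester-Kac matrix: a = (3, 2, 1), c = (1, 2, 3)
chi = charpoly([3 * 1, 2 * 2, 1 * 3])
print(chi)  # [9, 0, -10, 0, 1], i.e. (z^2 - 1)(z^2 - 9)
```

Since only the products $a_kc_k$ enter, any rebalancing of the sub- and superdiagonals with the same products leaves the spectrum unchanged, as used in the proof above.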

At the same time, the matrix $J_{2l+1}$ may have the zero eigenvalue of any odd multiplicity.

Example 2.2

When $n$ is odd, Matrix (2.1) can even be nilpotent. For instance, zero is the only eigenvalue of the matrix

(2.4) $\begin{pmatrix} 0 & 1 & 0 & 0 & 0\\ 1 & 0 & 1 & 0 & 0\\ 0 & 1 & 0 & -4 & 0\\ 0 & 0 & 1 & 0 & 2\\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}.$
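Nilpotency here can be confirmed by direct multiplication. The sketch below (pure Python, not from the paper) assumes the sub- and superdiagonals $(1,1,1,1)$ and $(1,1,-4,2)$ of Matrix (2.4) and checks that its fifth power vanishes, so its characteristic polynomial is $z^5$.

```python
# Matrix (2.4): subdiagonal (1, 1, 1, 1), superdiagonal (1, 1, -4, 2)
M = [[0, 1,  0,  0, 0],
     [1, 0,  1,  0, 0],
     [0, 1,  0, -4, 0],
     [0, 0,  1,  0, 2],
     [0, 0,  0,  1, 0]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = M
for _ in range(4):
    P = matmul(P, M)          # P = M^5 after the loop
print(all(entry == 0 for row in P for entry in row))  # True
```

Since the matrix is irreducible (hence non-derogatory, as noted below), $M^4\neq0$ while $M^5=0$: the zero eigenvalue sits in a single Jordan block of size five.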

It is folklore that any irreducible tridiagonal matrix $M_n$ is non-derogatory, i.e., all its eigenvalues are geometrically simple. Indeed, if $\lambda$ is an eigenvalue of $M_n$, then $M_n-\lambda I_n$, where $I_n$ is the $n\times n$ identity matrix, is singular, but its submatrix obtained from $M_n-\lambda I_n$ by deleting, say, the first column and the last row is non-singular. Thus, every eigenvalue has only one eigenvector (up to a scalar factor).

From the form of Matrix (2.1) and its spectrum, it follows that the eigenvectors and the generalised eigenvectors[2] corresponding to the eigenvalues $\lambda_i$ and $-\lambda_i$ are closely related. Define the matrix $E_n=\{e_{ij}\}_{i,j=1}^{n}$ by

(2.5) $e_{ij}=\begin{cases}(-1)^{i-1}, & \text{if } i=j,\\ 0, & \text{if } i\neq j,\end{cases}$

i.e., let $E_n=\operatorname{diag}(1,-1,1,\ldots,(-1)^{n-1})$; then the following theorem holds.

Theorem 2.3

Let $\lambda$ and $-\lambda$ be eigenvalues of the matrix $J_n$ of multiplicity $k\geq1$. If $u_0(\lambda)$ is the eigenvector and $u_j(\lambda)$, $j=1,\ldots,k-1$, are the generalised eigenvectors of $J_n$ corresponding to $\lambda$, then

(2.6) $u_j(-\lambda)=(-1)^jE_nu_j(\lambda),\qquad j=0,\ldots,k-1,$

are the eigenvector and the generalised eigenvectors of $J_n$ corresponding to $-\lambda$.

Remark 2.4

Given $n=2l+1$, the substitution of $\lambda=0$ turns (2.6) into $u_0(0)=E_{2l+1}u_0(0)$, showing that the eigenvector $u_0(0)$ only has zero even components. Moreover, for $j=1,2,\ldots$, the corresponding generalised eigenvectors (they are defined non-uniquely[3]) may be (and will be) chosen so that all even components of $u_{2j}(0)$ and all odd components of $u_{2j+1}(0)$ are equal to zero.

Proof of Theorem 2.3

Indeed, it is easy to see that

$J_n-\lambda I_n=-E_n(J_n+\lambda I_n)E_n,$

where $I_n$ is the $n\times n$ identity matrix. By definition,

(2.7) $(J_n-\lambda I_n)u_0(\lambda)=0,\qquad(J_n-\lambda I_n)u_j(\lambda)=u_{j-1}(\lambda),\quad j=1,\ldots,k-1,$

so we have

$-E_n(J_n+\lambda I_n)E_nu_0(\lambda)=0,\qquad -E_n(J_n+\lambda I_n)E_nu_j(\lambda)=u_{j-1}(\lambda),\quad j=1,\ldots,k-1.$

After left multiplication by $-E_n$, we therefore arrive at

$(J_n+\lambda I_n)E_nu_0(\lambda)=0,\qquad(J_n+\lambda I_n)E_nu_j(\lambda)=-E_nu_{j-1}(\lambda),\quad j=1,\ldots,k-1,$

which, on writing (2.6), fits the definition of the generalised eigenvectors $u_j(-\lambda)$:

$(J_n+\lambda I_n)u_0(-\lambda)=0,\qquad(J_n+\lambda I_n)u_j(-\lambda)=u_{j-1}(-\lambda),\quad j=1,\ldots,k-1,$

as required.□
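The conjugation identity at the heart of this proof can be checked mechanically. The sketch below (pure Python, with a hypothetical $5\times5$ instance of (2.1)) verifies $J_n-\lambda I_n=-E_n(J_n+\lambda I_n)E_n$ entrywise for one sample value of $\lambda$; since the identity is affine in $\lambda$, it reduces to the anticommutation fact $E_nJ_nE_n=-J_n$.

```python
# Entrywise check of J_n - lam*I_n = -E_n (J_n + lam*I_n) E_n on a
# hypothetical 5x5 instance of (2.1); lam = 2 is an arbitrary sample value.
n = 5
a = [4, 3, 2, 1]   # subdiagonal
c = [1, 2, 3, 4]   # superdiagonal
lam = 2

J = [[0] * n for _ in range(n)]
for k in range(n - 1):
    J[k][k + 1] = c[k]
    J[k + 1][k] = a[k]
E = [[(-1) ** i if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

lhs = [[J[i][j] - (lam if i == j else 0) for j in range(n)] for i in range(n)]
JpL = [[J[i][j] + (lam if i == j else 0) for j in range(n)] for i in range(n)]
rhs = matmul(matmul(E, JpL), E)
print(all(lhs[i][j] == -rhs[i][j] for i in range(n) for j in range(n)))  # True
```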

3 Tridiagonal matrices with alternating signs on the main diagonal

In this section, we study the spectrum and (generalised) eigenvectors of the matrix

(3.1) $A_n=J_n+xE_n,$

where the matrices $J_n$ and $E_n=\{e_{ij}\}_{i,j=1}^{n}$ are defined in (2.1) and (2.5), respectively, so the main diagonal of the matrix $A_n$ contains entries of the same non-zero absolute value and of alternating signs.

3.1 Eigenvalues

Theorem 3.1

The spectrum of the matrix $A_n$ defined in (3.1) has the form

(3.2) $\sigma(A_{2l})=\left\{\pm\sqrt{\lambda_1^2+x^2},\pm\sqrt{\lambda_2^2+x^2},\ldots,\pm\sqrt{\lambda_l^2+x^2}\right\}$

and

(3.3) $\sigma(A_{2l+1})=\left\{x,\pm\sqrt{\lambda_1^2+x^2},\pm\sqrt{\lambda_2^2+x^2},\ldots,\pm\sqrt{\lambda_l^2+x^2}\right\},$

for $l=\lfloor n/2\rfloor$, where $\{\lambda_i\}_{i=1}^{l}$ belongs to the spectrum of $J_n$, see (2.2).

In particular, if $x^2=-\lambda_j^2\neq0$ for some $j$, then $0$ is an eigenvalue of $A_n$ of even multiplicity. Moreover, for $n=2l+1$, some $\lambda_k$ may vanish, in which case Formulae (3.2)–(3.3) show the existence of the eigenvalues $\pm x$: the eigenvalue $x$ of $A_{2l+1}$ has odd multiplicity, while $-x$ is its eigenvalue of even multiplicity.

Proof

Note that the easily verifiable identity $J_nE_n+E_nJ_n=0$ implies that

(3.4) $A_n^2=(J_n+xE_n)^2=J_n^2+xJ_nE_n+xE_nJ_n+x^2I_n=J_n^2+x^2I_n,$

where $I_n$ is the $n\times n$ identity matrix.
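Identity (3.4) can be sanity-checked numerically; the sketch below (pure Python, with a hypothetical $6\times6$ instance of (2.1) and $x=3$, not from the paper) compares $A_n^2$ with $J_n^2+x^2I_n$ entrywise.

```python
# Entrywise check of (3.4) on a hypothetical 6x6 instance of (2.1).
n = 6
a = [1, 2, 3, 4, 5]          # subdiagonal
c = [5, 4, 3, 2, 1]          # superdiagonal
x = 3

J = [[0] * n for _ in range(n)]
for k in range(n - 1):
    J[k][k + 1] = c[k]
    J[k + 1][k] = a[k]
# A_n = J_n + x E_n, where E_n = diag(1, -1, 1, ...)
A = [[J[i][j] + (x * (-1) ** i if i == j else 0) for j in range(n)]
     for i in range(n)]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2, J2 = matmul(A, A), matmul(J, J)
ok = all(A2[i][j] == J2[i][j] + (x * x if i == j else 0)
         for i in range(n) for j in range(n))
print(ok)  # True
```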

Let $n=2l$. In this case, $\lambda_k\neq0$ for any $k=1,\ldots,l$ (see Theorem 2.1). Suppose first that all $\lambda_k^2$ are distinct. From (3.4), it follows that all the eigenvalues of $A_{2l}^2$ are double and equal to $\lambda_k^2+x^2$ for some $k=1,\ldots,l$. According to[4] [9, Chapter VIII, §6–7] (see also [10, p. 467–471]), the spectrum of the matrix $A_{2l}$ can only contain numbers of the form $\pm\sqrt{\lambda_k^2+x^2}$ with $k=1,\ldots,l$, while for each $k$ at least one of the numbers $\sqrt{\lambda_k^2+x^2}$ and $-\sqrt{\lambda_k^2+x^2}$ lies in $\sigma(A_{2l})$. Our aim is to show that for each $k$, both these numbers are indeed in the spectrum of $A_{2l}$ (i.e., (3.2) holds), so that all eigenvalues of $A_{2l}$ are simple, possibly excluding the double eigenvalue $0$ corresponding to $x^2=-\lambda_k^2$. We do this by finding an explicit expression for the eigenvector of $A_{2l}$ for each eigenvalue.

Let $\mu=\sqrt{\lambda_k^2+x^2}\neq0$ for some $k$ and some fixed branch of the square root. If $\mu$ is an eigenvalue of $A_{2l}$, then there exists a corresponding eigenvector $v_0(\mu)$ satisfying

$A_{2l}^2v_0(\mu)=\mu^2v_0(\mu)=(\lambda_k^2+x^2)v_0(\mu).$

Then, due to

$A_{2l}^2u_0(\lambda_k)=(\lambda_k^2+x^2)u_0(\lambda_k)\quad\text{and}\quad A_{2l}^2u_0(-\lambda_k)=(\lambda_k^2+x^2)u_0(-\lambda_k),$

where the eigenvectors $u_0(\lambda_k)$ and $u_0(-\lambda_k)$ of $J_{2l}$ correspond to $\lambda_k$ and $-\lambda_k$, respectively, we have

$v_0(\mu)=\alpha u_0(\lambda_k)+\beta u_0(-\lambda_k)$

for a certain choice of the coefficients $\alpha$ and $\beta$ with $|\alpha|+|\beta|\neq0$. Therefore,

$\mu(\alpha u_0(\lambda_k)+\beta u_0(-\lambda_k))=\mu v_0(\mu)=A_{2l}v_0(\mu)=(J_{2l}+xE_{2l})(\alpha u_0(\lambda_k)+\beta u_0(-\lambda_k))=\alpha J_{2l}u_0(\lambda_k)+\alpha xE_{2l}u_0(\lambda_k)+\beta J_{2l}u_0(-\lambda_k)+\beta xE_{2l}u_0(-\lambda_k)=(\alpha\lambda_k+\beta x)u_0(\lambda_k)+(\alpha x-\beta\lambda_k)u_0(-\lambda_k).$

So, for $\alpha$ and $\beta$, we obtain a linear homogeneous $2\times2$ system with determinant $(\mu-\lambda_k)(\mu+\lambda_k)-x^2=0$, thus necessarily having a nontrivial solution. This solution is unique (up to scaling), as at least one of the coefficients $\mu-\lambda_k$ and $\mu+\lambda_k$ of the linear system is nonzero due to $\lambda_k\neq0$.

We treat $x$ as a perturbation, and hence, we want our expression to remain valid as $x\to0$, i.e., as $A_{2l}\to J_{2l}$. For that reason, we fix the branch[5] of the square root in $\mu$ so that $\lim_{x\to0}\mu=\lambda_k$. This condition immediately yields $\alpha\neq0$. On letting $\alpha=1$, we obtain $\beta=\dfrac{x}{\lambda_k+\mu}$, which is $\beta=\dfrac{\mu-\lambda_k}{x}$ for $x\neq0$. So,

$v_0(\mu)=u_0(\lambda_k)+\dfrac{x}{\lambda_k+\mu}u_0(-\lambda_k),$

and, on taking the other branch of the square root (replacing $\mu$ with $-\mu$), it follows that $v_0(-\mu)$ may be set to be

$u_0(\lambda_k)+\dfrac{x}{\lambda_k-\mu}u_0(-\lambda_k)=u_0(\lambda_k)-\dfrac{\lambda_k+\mu}{x}u_0(-\lambda_k).$

The last expression for $v_0(-\mu)$ degenerates as $x\to0$: to keep it continuous, one needs to multiply the last expression by the factor $-\dfrac{x}{\lambda_k+\mu}$, which, in essence, replaces $\lambda_k$ with $-\lambda_k$ and relies on the fact that $\lim_{x\to0}(-\mu)=-\lambda_k$:

$v_0(-\mu)=u_0(-\lambda_k)-\dfrac{x}{\lambda_k+\mu}u_0(\lambda_k).$

Thus, both numbers $\mu\neq0$ and $-\mu$ belong to the spectrum of $A_{2l}$, provided that $J_{2l}$ has only simple eigenvalues.
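The eigenvector formula for $v_0(\mu)$ admits a quick exact check. The sketch below (not from the paper) uses the $4\times4$ Sylvester-Kac matrix as a hypothetical test case, with $\lambda_k=1$, $u_0(1)=(1,1,-1,-1)^T$, and $x=3/4$, so that $\mu=\sqrt{1+x^2}=5/4$ is rational and `fractions.Fraction` keeps the arithmetic exact.

```python
from fractions import Fraction as F

# Hypothetical exact test case: 4x4 Sylvester-Kac matrix, lambda_k = 1,
# u_0(1) = (1, 1, -1, -1), x = 3/4, hence mu = sqrt(1 + x^2) = 5/4.
J = [[0, 1, 0, 0],
     [3, 0, 2, 0],
     [0, 2, 0, 3],
     [0, 0, 1, 0]]
x, lam, mu = F(3, 4), F(1), F(5, 4)
u_plus = [F(1), F(1), F(-1), F(-1)]                    # u_0(lambda_k)
u_minus = [(-1) ** i * u_plus[i] for i in range(4)]    # u_0(-lambda_k) = E u_0(lambda_k)

A = [[J[i][j] + (x * (-1) ** i if i == j else 0) for j in range(4)]
     for i in range(4)]
v0 = [u_plus[i] + x / (lam + mu) * u_minus[i] for i in range(4)]
Av0 = [sum(A[i][j] * v0[j] for j in range(4)) for i in range(4)]
print(Av0 == [mu * t for t in v0])  # True
```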

Suppose now that $x=\tau i\lambda_j$ for fixed $j$ and $\tau\in\{-1,1\}$, while $u_0(\lambda_j)$ and $u_0(-\lambda_j)$ are the eigenvectors of $J_{2l}$ corresponding to the eigenvalues $\lambda_j$ and $-\lambda_j$, respectively. Then, it is easy to see that

$A_{2l}(u_0(\lambda_j)+\tau iu_0(-\lambda_j))=(J_{2l}+\tau i\lambda_jE_{2l})(u_0(\lambda_j)+\tau iu_0(-\lambda_j))=0$

and

$\dfrac{1}{2\lambda_j}A_{2l}(u_0(\lambda_j)-\tau iu_0(-\lambda_j))=u_0(\lambda_j)+\tau iu_0(-\lambda_j),$

so

(3.5) $v_0(0)=u_0(\lambda_j)+\tau iu_0(-\lambda_j)\quad\text{and}\quad v_1(0)=\dfrac{u_0(\lambda_j)-\tau iu_0(-\lambda_j)}{2\lambda_j}$

are, respectively, the eigenvector and generalised eigenvector of the zero eigenvalue of the matrix A 2 l .
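Formulae (3.5) can likewise be verified exactly. The sketch below (not from the paper) takes the same hypothetical $4\times4$ Sylvester-Kac test case with $\lambda_j=1$, $\tau=1$, and $x=i$, and checks that $v_0(0)$ is annihilated by $A_{2l}$ while $v_1(0)$ is mapped to $v_0(0)$; all entries are Gaussian half-integers, so Python's `complex` arithmetic stays exact.

```python
# Hypothetical exact test case for (3.5): 4x4 Sylvester-Kac matrix,
# lambda_j = 1, tau = 1, x = i*lambda_j = 1j.
J = [[0, 1, 0, 0],
     [3, 0, 2, 0],
     [0, 2, 0, 3],
     [0, 0, 1, 0]]
u_plus = [1, 1, -1, -1]        # u_0(1)
u_minus = [1, -1, -1, 1]       # u_0(-1) = E u_0(1)
x = 1j
A = [[J[i][j] + (x * (-1) ** i if i == j else 0) for j in range(4)]
     for i in range(4)]

v0 = [u_plus[i] + 1j * u_minus[i] for i in range(4)]
v1 = [(u_plus[i] - 1j * u_minus[i]) / 2 for i in range(4)]   # 2*lambda_j = 2
Av0 = [sum(A[i][j] * v0[j] for j in range(4)) for i in range(4)]
Av1 = [sum(A[i][j] * v1[j] for j in range(4)) for i in range(4)]
print(Av0 == [0, 0, 0, 0], Av1 == v0)  # True True
```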

Thus, if all eigenvalues $\pm\lambda_k$ of $J_{2l}$ are simple, then $A_{2l}$ has simple non-zero eigenvalues of the form $\pm\sqrt{\lambda_k^2+x^2}$ (and, in the case $x=\pm i\lambda_j$, a double zero eigenvalue). Now, if some $\lambda_k$ is an eigenvalue of $J_{2l}$ of multiplicity $r$ and $x^2\neq-\lambda_k^2$, then due to the continuous dependence of the roots of the characteristic polynomial on its coefficients, the eigenvalues $\pm\sqrt{\lambda_k^2+x^2}$ of $A_{2l}$ are also of multiplicity $r$. Analogously, if $x^2=-\lambda_j^2$ and $\lambda_j$ is an eigenvalue of $J_{2l}$ of multiplicity $r$, then by continuity, the eigenvalue $\mu=0$ of $A_{2l}$ is of multiplicity $2r$.

Suppose now that $n=2l+1$, and hence, $J_{2l+1}$ is singular by Theorem 2.1. So, it follows from (3.4) that the set

$\left\{\pm x,\pm\sqrt{\lambda_1^2+x^2},\pm\sqrt{\lambda_2^2+x^2},\ldots,\pm\sqrt{\lambda_l^2+x^2}\right\}$

contains all possible eigenvalues of the matrix $A_{2l+1}$, since only square roots of the eigenvalues of $A_{2l+1}^2$ can be eigenvalues of $A_{2l+1}$ (see [9, Chapter VIII, §6–7]). Similarly to the case of $n=2l$, if the non-zero eigenvalues $\pm\lambda_k$ of $J_{2l+1}$ have multiplicity $r$, one can show that the matrix $A_{2l+1}$ has eigenvalues $\pm\sqrt{\lambda_k^2+x^2}$ of multiplicity $r$ for $x^2\neq-\lambda_k^2$, or the zero eigenvalue of multiplicity $2r$ for $x^2=-\lambda_k^2$. (In fact, the relations between the corresponding eigenvectors of $A_n$ and $J_n$ for $n=2l+1$ remain the same as for $n=2l$.)

Moreover, $\mu=x$ is always an eigenvalue of $A_{2l+1}$. Indeed, if $J_{2l+1}u_0(0)=0$, then $E_{2l+1}u_0(0)=u_0(0)$ according to Remark 2.4, and hence,

(3.6) $A_{2l+1}u_0(0)=J_{2l+1}u_0(0)+xE_{2l+1}u_0(0)=xu_0(0).$

Suppose now that $J_{2l+1}$ has a zero eigenvalue of multiplicity $3$, and

$J_{2l+1}u_0(0)=0,\qquad J_{2l+1}u_1(0)=u_0(0),\qquad J_{2l+1}u_2(0)=u_1(0).$

In this case, $-x$ is also an eigenvalue of the matrix $A_{2l+1}$, with the eigenvector $v_0(-x)=u_0(0)-2xu_1(0)$. Indeed,

$A_{2l+1}(u_0(0)-2xu_1(0))=xu_0(0)-2x(J_{2l+1}u_1(0)+xE_{2l+1}u_1(0))=xu_0(0)-2x(u_0(0)-xu_1(0))=-x(u_0(0)-2xu_1(0)).$

Moreover, in this case, $x$ is an eigenvalue of $A_{2l+1}$ of multiplicity at least $2$, and by (2.6), the vector $v_1(x)=u_1(0)+2xu_2(0)$ is indeed a generalised eigenvector of $A_{2l+1}$ corresponding to the eigenvalue $x$, as

$(A_{2l+1}-xI_{2l+1})v_1(x)=(J_{2l+1}+xE_{2l+1}-xI_{2l+1})v_1(x)=u_0(0)+2xu_1(0)-xu_1(0)+2x^2u_2(0)-xu_1(0)-2x^2u_2(0)=u_0(0).$

Hence, if $0$ is a triple eigenvalue of $J_{2l+1}$, then $x$ is a double and $-x$ is a simple eigenvalue, due to the multiplicities of the other eigenvalues of $A_{2l+1}$.

Now, by continuity, we obtain that if $0$ is an eigenvalue of $J_{2l+1}$ of multiplicity $2r+1$, $r\geq0$, then $x$ is an eigenvalue of $A_{2l+1}$ of multiplicity $r+1$, while $-x$ is of multiplicity $r$. Consequently, Formulae (3.2)–(3.3) completely describe the spectrum of $A_n$.□

As a consequence of this theorem, one obtains the following formulae generalising the result of [11].

Corollary 3.2

For the matrix $A_n$ defined in (3.1),

$\det A_{2l}=(-1)^l\prod_{k=1}^{l}(x^2+\lambda_k^2)\quad\text{and}\quad\det A_{2l+1}=(-1)^lx\prod_{k=1}^{l}(x^2+\lambda_k^2),$

where some of the numbers $\lambda_k$ in the second product can be zero.
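Corollary 3.2 can be tested against the standard determinant recurrence for tridiagonal matrices. The sketch below (not from the paper) uses the $4\times4$ Sylvester-Kac matrix, with known spectrum $\{\pm1,\pm3\}$, as a hypothetical test case.

```python
def tridiag_det(diag, sub, sup):
    """Determinant of a tridiagonal matrix via the standard recurrence
    D_k = b_k * D_{k-1} - a_{k-1} * c_{k-1} * D_{k-2}."""
    d_prev, d = 1, 1
    for k, b in enumerate(diag):
        d_prev, d = d, b * d - (sub[k - 1] * sup[k - 1] * d_prev if k else 0)
    return d

# 4x4 Sylvester-Kac matrix: a = (3, 2, 1), c = (1, 2, 3), spectrum {±1, ±3},
# so Corollary 3.2 predicts det A_4 = (x^2 + 1)(x^2 + 9).
a, c, x = [3, 2, 1], [1, 2, 3], 2
diag = [x * (-1) ** i for i in range(4)]          # (x, -x, x, -x)
print(tridiag_det(diag, a, c), (x**2 + 1) * (x**2 + 9))  # 65 65
```

The odd case works the same way, e.g. for the $5\times5$ Sylvester-Kac matrix with spectrum $\{0,\pm2,\pm4\}$, the corollary predicts $\det A_5=x(x^2+4)(x^2+16)$.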

3.2 Eigenvectors and generalised eigenvectors

Let us list the explicit expressions for the eigenvectors and first generalised eigenvectors for all possible eigenvalues of $A_n=J_n+xE_n$.

(1) $\mu=x$. According to Theorem 3.1, if $\lambda=0$ is an eigenvalue of $J_n$ of multiplicity $2r+1$, then $\mu=x$ is an eigenvalue of $A_n$ of multiplicity $r+1$. In the proof of that theorem, we showed that the eigenvector and first generalised eigenvector of $A_n$ corresponding to $\mu=x$ are

$v_0(x)=u_0(0),\qquad v_1(x)=u_1(0)+2xu_2(0),$

where $u_k(0)$, $k=0,1,2$, are the eigenvector and generalised eigenvectors of $J_n$ corresponding to the eigenvalue $\lambda=0$.
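These expressions can be checked on the nilpotent matrix (2.4), whose zero eigenvalue has multiplicity five. The sketch below (not from the paper) assumes the Jordan chain $u_0=(2,0,-2,0,1)^T$, $u_1=(0,2,0,1,0)^T$, $u_2=(1,0,1,0,0)^T$, one of many valid choices made as in Remark 2.4, and verifies $A_nv_0(x)=xv_0(x)$ and $A_nv_1(x)=xv_1(x)+v_0(x)$ exactly.

```python
from fractions import Fraction as F

# Hypothetical test case: J is the nilpotent matrix (2.4); u0, u1, u2 form
# the start of its Jordan chain for the eigenvalue 0 (J u0 = 0, J u1 = u0,
# J u2 = u1), with components chosen as in Remark 2.4.
J = [[0, 1, 0,  0, 0],
     [1, 0, 1,  0, 0],
     [0, 1, 0, -4, 0],
     [0, 0, 1,  0, 2],
     [0, 0, 0,  1, 0]]
u0 = [2, 0, -2, 0, 1]
u1 = [0, 2, 0, 1, 0]
u2 = [1, 0, 1, 0, 0]
x = F(1, 2)
A = [[J[i][j] + (x * (-1) ** i if i == j else 0) for j in range(5)]
     for i in range(5)]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(5)) for i in range(5)]

v0 = u0
v1 = [u1[i] + 2 * x * u2[i] for i in range(5)]
print(apply(A, v0) == [x * t for t in v0])                      # True
print(apply(A, v1) == [x * v1[i] + v0[i] for i in range(5)])    # True
```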

(2) $\mu=-x$. By Theorem 3.1, it is an eigenvalue of $A_n$ only if $\lambda=0$ is an eigenvalue of $J_n$ of multiplicity at least $3$; if the multiplicity of $\lambda=0$ is at least $5$, then $\mu=-x$ is a multiple eigenvalue of $A_n$. In the proof of Theorem 3.1, we showed that the eigenvector of $A_n$ corresponding to $\mu=-x$ has the form

$v_0(-x)=u_0(0)-2xu_1(0).$

Let us find $v_1(-x)$. Since, by definition,

(3.7) $(A_n+xI_n)v_1(-x)=v_0(-x),$

with use of (3.4), one has

$0=(A_n+xI_n)^2v_1(-x)=(J_n^2+x^2I_n)v_1(-x)-2x^2v_1(-x)+2xv_0(-x)+x^2v_1(-x)=J_n^2v_1(-x)+2xv_0(-x),$

so that

$J_n^2v_1(-x)=-2xv_0(-x)=-2xu_0(0)+4x^2u_1(0).$

Therefore, the vector $v_1(-x)$ is a linear combination of the vectors $u_k(0)$, $k=0,\ldots,3$,

(3.8) $v_1(-x)=\alpha u_0(0)+\beta u_1(0)-2xu_2(0)+4x^2u_3(0),$

since $J_n^2u_1(0)=J_n^2u_0(0)=0$ and $J_n^2u_{k+2}(0)=u_k(0)$ for $k=0,1,\ldots$. At the same time,

$(A_n+xI_n)u_1(0)=(J_n+xE_n+xI_n)u_1(0)=u_0(0),$

and hence, plugging (3.8) into (3.7) (with $\alpha=0$, which forces $\beta=1$) gives us

$v_1(-x)=u_1(0)-2xu_2(0)+4x^2u_3(0).$

(3) $\mu=0$. If $\lambda_j\neq0$ and $x=\pm i\lambda_j$, then by (3.5), the corresponding eigenvector and first generalised eigenvector of $A_n$ are

(3.9) $v_0(0)=u_0(\lambda_j)\pm iu_0(-\lambda_j)\quad\text{and}\quad v_1(0)=\dfrac{u_0(\lambda_j)\mp iu_0(-\lambda_j)}{2\lambda_j}.$

(4) $\mu=\sqrt{x^2+\lambda_k^2}$ for some eigenvalue $\lambda_k\neq0$ of the matrix $J_n$, where we choose any fixed branch of the complex square root. From the proof of Theorem 3.1, we know that

$v_0(\mu)=u_0(\lambda_k)+\dfrac{x}{\lambda_k+\mu}u_0(-\lambda_k).$

If $\mu$ is a multiple eigenvalue, the same approach we used to find the eigenvector allows us to express the first generalised eigenvector of $A_n$ corresponding to $\mu$ as a combination of the eigenvectors and generalised eigenvectors of $J_n$ corresponding to $\pm\lambda_k$. Namely, let

$J_nu_1(\lambda_k)=\lambda_ku_1(\lambda_k)+u_0(\lambda_k).$

If $v_1(\mu)$ satisfying $A_nv_1(\mu)=\mu v_1(\mu)+v_0(\mu)$ is sought in the form

$v_1(\mu)=\alpha u_1(\lambda_k)+\beta u_1(-\lambda_k)+\gamma u_0(\lambda_k)+\delta u_0(-\lambda_k),$

then the same approach as earlier yields

(3.10) $v_1(\mu)=\dfrac{1}{2\lambda_k}\left(u_0(\lambda_k)-\dfrac{x}{\lambda_k+\mu}u_0(-\lambda_k)\right)+\dfrac{\mu}{\lambda_k}\left(u_1(\lambda_k)-\dfrac{x}{\lambda_k+\mu}u_1(-\lambda_k)\right).$

Here, the chosen values of $\alpha$, $\beta$, $\gamma$, and $\delta$ are natural in the sense that $v_0(\mu)$ and $v_1(\mu)$ become (3.9) as $\mu\to0$ and do not degenerate as $x\to0$.

Analogously, one obtains

$v_0(-\mu)=u_0(-\lambda_k)-\dfrac{x}{\lambda_k+\mu}u_0(\lambda_k)\quad\text{and}\quad v_1(-\mu)=-\dfrac{1}{2\lambda_k}\left(\dfrac{x}{\lambda_k+\mu}u_0(\lambda_k)+u_0(-\lambda_k)\right)+\dfrac{\mu}{\lambda_k}\left(\dfrac{x}{\lambda_k+\mu}u_1(\lambda_k)+u_1(-\lambda_k)\right).$

We note that the choice of generalised eigenvectors is non-unique, and the expression (3.10) may be replaced with a different (and, in a sense, more general) formula considered in our forthcoming publication [6]. That publication also gives a detailed description of the generalised eigenvectors corresponding to the eigenvalues $\pm x$ of $A_{2l+1}$ induced by the non-simple eigenvalue $\lambda=0$ of $J_{2l+1}$.

4 Tridiagonal matrices with two-periodic main diagonal

Consider now the matrix

$B_n=\begin{pmatrix} b_1 & c_1 & 0 & \cdots & 0 & 0\\ a_1 & b_2 & c_2 & \cdots & 0 & 0\\ 0 & a_2 & b_3 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & b_{n-1} & c_{n-1}\\ 0 & 0 & 0 & \cdots & a_{n-1} & b_n \end{pmatrix},\qquad a_k,c_k\in\mathbb{C}\setminus\{0\},\qquad b_k=\begin{cases}x, & \text{if } k \text{ is odd},\\ y, & \text{if } k \text{ is even}.\end{cases}$

It is easy to see that

$B_n=J_n+\dfrac{x-y}{2}E_n+\dfrac{x+y}{2}I_n,$

where $J_n$ is defined in (2.1), so from (3.2)–(3.3), we obtain

(4.1) $\sigma(B_{2l})=\left\{\dfrac{x+y}{2}\pm\dfrac{1}{2}\sqrt{4\lambda_1^2+(x-y)^2},\ \dfrac{x+y}{2}\pm\dfrac{1}{2}\sqrt{4\lambda_2^2+(x-y)^2},\ \ldots,\ \dfrac{x+y}{2}\pm\dfrac{1}{2}\sqrt{4\lambda_l^2+(x-y)^2}\right\}$

and

(4.2) $\sigma(B_{2l+1})=\left\{x,\ \dfrac{x+y}{2}\pm\dfrac{1}{2}\sqrt{4\lambda_1^2+(x-y)^2},\ \dfrac{x+y}{2}\pm\dfrac{1}{2}\sqrt{4\lambda_2^2+(x-y)^2},\ \ldots,\ \dfrac{x+y}{2}\pm\dfrac{1}{2}\sqrt{4\lambda_l^2+(x-y)^2}\right\}.$

These formulae can be obtained from (3.2)–(3.3) by replacing $x$ with $\dfrac{x-y}{2}$ and then adding $\dfrac{x+y}{2}$ to all the eigenvalues.

Since the determinant of a matrix is equal to the product of its eigenvalues, from (4.1)–(4.2) one immediately obtains that

(4.3) $\det B_{2l}=\prod_{k=1}^{l}(xy-\lambda_k^2)\quad\text{and}\quad\det B_{2l+1}=x\prod_{k=1}^{l}(xy-\lambda_k^2),$

On letting $J_n$ be the Sylvester-Kac matrix or its main principal submatrix, Formulae (4.1)–(4.3) generalise the results of the works [7,12], as well as an analogous result in [8].
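Formula (4.3) is straightforward to confirm with the standard determinant recurrence for tridiagonal matrices; the sketch below (not from the paper) uses the $4\times4$ and $5\times5$ Sylvester-Kac matrices as hypothetical test cases with $x=2$, $y=3$.

```python
def tridiag_det(diag, sub, sup):
    # determinant via D_k = b_k * D_{k-1} - a_{k-1} * c_{k-1} * D_{k-2}
    d_prev, d = 1, 1
    for k, b in enumerate(diag):
        d_prev, d = d, b * d - (sub[k - 1] * sup[k - 1] * d_prev if k else 0)
    return d

x, y = 2, 3
# 4x4 Sylvester-Kac matrix: lambda_k in {1, 3}, so det B_4 = (xy - 1)(xy - 9)
a, c = [3, 2, 1], [1, 2, 3]
diag = [x if k % 2 == 0 else y for k in range(4)]   # two-periodic (x, y, x, y)
print(tridiag_det(diag, a, c), (x * y - 1) * (x * y - 9))  # -15 -15
```

For the $5\times5$ Sylvester-Kac matrix (non-zero $\lambda_k\in\{2,4\}$), the prediction is $\det B_5=x(xy-4)(xy-16)$, which the same routine reproduces.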

Observe that the eigenvectors and generalised eigenvectors of the matrices $A_n$ and $B_n$ are related through replacing $x$ with $\dfrac{x-y}{2}$.

The left eigenvectors of A n and B n may be obtained using the following remark.

Remark 4.1

Note that if the (right) eigenvectors and generalised eigenvectors of some irreducible tridiagonal matrix are known, then it is easy to find its left eigenvectors: the eigenvectors of a matrix

$J_n=\begin{pmatrix} b_1 & c_1 & 0 & \cdots & 0 & 0\\ a_1 & b_2 & c_2 & \cdots & 0 & 0\\ 0 & a_2 & b_3 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & b_{n-1} & c_{n-1}\\ 0 & 0 & 0 & \cdots & a_{n-1} & b_n \end{pmatrix},\qquad a_k,c_k\in\mathbb{C}\setminus\{0\},$

are related to the eigenvectors of its transpose $J_n^T$. It is clear that the spectra of $J_n^T$ and $J_n$ coincide. So if $\tilde u_0(\lambda)$ is the eigenvector of $J_n^T$ corresponding to the eigenvalue $\lambda$, then the obvious formula

(4.4) $J_n^T=D_n^{-1}J_nD_n,$

where the diagonal matrix $D_n$ has the form

$D_n=\operatorname{diag}(d_1,d_2,\ldots,d_n),\qquad d_1=1,\quad d_{k+1}=\dfrac{a_1a_2\cdots a_k}{c_1c_2\cdots c_k},\quad k=1,\ldots,n-1,$

implies by induction that

$\tilde u_0(\lambda)=D_n^{-1}u_0(\lambda),\qquad\tilde u_j(\lambda)=D_n^{-1}u_j(\lambda),\quad j=1,\ldots,k-1,$

where k is the multiplicity of the eigenvalue λ .
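The similarity (4.4) can be checked directly; the sketch below (not from the paper) builds $D_n$ from the stated formula for a hypothetical $4\times4$ tridiagonal matrix and verifies $J_n^T=D_n^{-1}J_nD_n$ entrywise with exact rational arithmetic.

```python
from fractions import Fraction as F

# A hypothetical 4x4 irreducible tridiagonal matrix with diagonal b
n = 4
a = [3, 2, 1]        # subdiagonal
c = [1, 2, 3]        # superdiagonal
b = [5, -1, 4, 2]    # main diagonal (arbitrary)
J = [[b[i] if i == j else 0 for j in range(n)] for i in range(n)]
for k in range(n - 1):
    J[k][k + 1] = c[k]
    J[k + 1][k] = a[k]

# d_1 = 1, d_{k+1} = (a_1 ... a_k) / (c_1 ... c_k)
d = [F(1)]
for k in range(n - 1):
    d.append(d[-1] * a[k] / c[k])

# (D^{-1} J D)_{ij} = J_{ij} * d_j / d_i must equal (J^T)_{ij} = J_{ji}
ok = all(J[j][i] == J[i][j] * d[j] / d[i] for i in range(n) for j in range(n))
print(ok)  # True
```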

Acknowledgement

The authors would like to thank the anonymous referees for their helpful comments.

  1. Funding information: The results of Sections 2 and 3.1 were obtained with the support of the Russian Science Foundation Grant 19-71-30002. The work on Sections 3.2 and 4 was supported by the state assignment, Registration Number 122041100132-9.

  2. Author contributions: The authors contributed equally.

  3. Conflict of interest: The authors declare no conflicts of interest.

  4. Data availability statement: Not applicable.

References

[1] A. Alazemi, C. M. da Fonseca, and E. Kılıç, The spectrum of a new class of Sylvester-Kac matrices, Filomat 35 (2021), no. 12, 4017–4031. 10.2298/FIL2112017A

[2] R. Bronson, Matrix Methods: An Introduction, Academic Press, New York-London, 1969.

[3] A. Cayley, On the determination of the value of a certain determinant, Quart. Math. J. 2 (1858), 163–166.

[4] T. S. Chihara, On kernel polynomials and related systems, Boll. Unione Mat. Ital., III. Ser. 19 (1964), no. 4, 451–459.

[5] A. Dyachenko and M. Tyaglov, Linear differential operators with polynomial coefficients generating generalised Sylvester-Kac matrices, 2021, preprint, arXiv:2104.01216.

[6] A. Dyachenko and M. Tyaglov, Spectral properties of tridiagonal matrices with two-periodic main diagonal via orthogonal polynomials, in preparation.

[7] C. M. da Fonseca, A short note on the determinant of a Sylvester-Kac type matrix, Int. J. Nonlinear Sci. Numer. Simul. 21 (2020), no. 3–4, 361–362. 10.1515/ijnsns-2018-0375

[8] C. M. da Fonseca and E. Kılıç, A new type of Sylvester-Kac matrix and its spectrum, Linear Multilinear Algebra 69 (2021), no. 6, 1072–1082. 10.1080/03081087.2019.1620673

[9] F. R. Gantmacher, The Theory of Matrices, vol. 1, translated from the Russian by K. A. Hirsch, reprint of the 1959 translation, AMS Chelsea Publishing, Providence, RI, 2000.

[10] R. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, 1991. 10.1017/CBO9780511840371

[11] E. Kılıç, Sylvester-tridiagonal matrix with alternating main diagonal entries and its spectra, Int. J. Nonlinear Sci. Numer. Simul. 14 (2013), no. 5, 261–266. 10.1515/ijnsns-2011-0068

[12] E. Kılıç and T. Arikan, Evaluation of spectrum of 2-periodic tridiagonal-Sylvester matrix, Turk. J. Math. 40 (2016), no. 1, 80–89. 10.3906/mat-1503-46

[13] P. Maroni, Sur la décomposition quadratique d'une suite de polynômes orthogonaux. I, Riv. Mat. Pura Appl. 6 (1990), 19–53.

[14] R. Oste and J. Van der Jeugt, Tridiagonal test matrices for eigenvalue computations: Two-parameter extensions of the Clement matrix, J. Comput. Appl. Math. 314 (2017), 30–39. 10.1016/j.cam.2016.10.019

[15] O. Taussky and J. Todd, Another look at a matrix of Mark Kac, Linear Algebra Appl. 150 (1991), 341–360. 10.1016/0024-3795(91)90179-Z

Received: 2023-09-24
Revised: 2024-04-10
Accepted: 2024-05-02
Published Online: 2024-07-04

© 2024 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
