
Eigenvalue Problem, Spin Systems, Lie Groups, and Parameter Dependence

  • Willi-Hans Steeb and Yorick Hardy
Published/Copyright: May 26, 2015

Abstract

We study square matrices F(α) over ℂ with α∈ℝ, where the eigenvalues depend on the parameter α but not the eigenvectors, and vice versa, where the eigenvectors depend on the parameter α but not the eigenvalues. We derive sufficient conditions for such properties. Applications to Lie groups and spin systems are provided. Both normal and nonnormal matrices are investigated.

1 Introduction

Consider the n×n matrices (α∈ℝ),

F(\alpha)=\begin{pmatrix} f_{11}(\alpha) & f_{12}(\alpha) & \cdots & f_{1n}(\alpha)\\ f_{21}(\alpha) & f_{22}(\alpha) & \cdots & f_{2n}(\alpha)\\ \vdots & \vdots & & \vdots\\ f_{n1}(\alpha) & f_{n2}(\alpha) & \cdots & f_{nn}(\alpha) \end{pmatrix},

where the functions (entries) fjk: ℝ → ℝ (j, k = 1, …, n) are analytic. We construct matrices F(α) where all of the eigenvalues depend on the parameter α but the corresponding eigenvectors are independent of α. Vice versa, we construct matrices where all of the eigenvectors depend on α but the eigenvalues do not depend on α.

Such matrices play a central role in the theory of Lie groups [1–7]. Thus, we will focus mainly on matrices that occur in Lie groups, in particular, SO(n), O(n), SO(n, 1), and O(n, 1).

Note that we also have matrices such as

X(\alpha)=\frac{1+\alpha}{1+\alpha^{2}}\begin{pmatrix} \alpha^{2} & \alpha+1-\alpha^{2}\\ \alpha^{2} & \alpha^{2} \end{pmatrix},\qquad Y(\alpha)=\begin{pmatrix} 1 & \alpha\\ 0 & \alpha \end{pmatrix},

where for X(α), all eigenvalues and eigenvectors depend on α, and for Y(α), one eigenvalue–eigenvector pair does not depend on α and one pair does depend on α. Furthermore, although F(α) depends on α, both the eigenvectors and the eigenvalues may be independent of α. For example,

F(\alpha)=\begin{pmatrix} 0 & \alpha\\ 0 & 0 \end{pmatrix}

has the eigenvalue 0 and eigenvectors (t, 0)^T, where t ≠ 0. However, this does not exclude the eigenvector (α, 0)^T (provided α ≠ 0). To exclude this case, we will consider normalised eigenvectors.
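
Such claims are quickly checked with a computer algebra system. A minimal sketch for Y(α), assuming the sympy library (the variable names are ours):

from sympy import Matrix, symbols

alpha = symbols('alpha', real=True)
Y = Matrix([[1, alpha], [0, alpha]])

# eigenvects() returns (eigenvalue, multiplicity, basis vectors) triples:
# the pair with eigenvalue 1 and eigenvector (1, 0)^T is independent of
# alpha, while the pair belonging to the eigenvalue alpha depends on alpha.
for val, mult, vecs in Y.eigenvects():
    print(val, vecs[0].T)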

An important special case we also study is given by F(α)=X + αY, where X and Y are n×n Hermitian matrices independent of α. In particular, this case finds applications for spin systems [8–10]. The question also plays a role for energy level motion (see [11, 12] and references therein) and exceptional and diabolic points (see [13] and references therein). The case F(α)=D + αN, where D is an n×n Hermitian diagonal matrix and N is an n×n Hermitian matrix with diagonal entries equal to 0, has been studied by Steeb and Louw [12].

Parameter differentiation for matrices, which plays the central role here, has been developed by Aizu [14], Kumar [15], Wilcox [16], and Louisell [17].

2 Examples for Spin Systems

Let σ1, σ2, and σ3 be the Pauli spin matrices. Consider the eigenvalue problem for the spin Hamilton operator Ĥ = ω1 σ1⊗σ1 + ω3 σ3⊗σ3, where ⊗ denotes the Kronecker product [18]. With α = ω3/ω1, we write

\hat{K}=\frac{\hat{H}}{\omega_{1}}=\sigma_{1}\otimes\sigma_{1}+\alpha\,\sigma_{3}\otimes\sigma_{3}=\begin{pmatrix} \alpha & 0 & 0 & 1\\ 0 & -\alpha & 1 & 0\\ 0 & 1 & -\alpha & 0\\ 1 & 0 & 0 & \alpha \end{pmatrix}.

The eigenvalues of K̂ are 1 + α, −1 + α, 1 − α, and −1 − α, with the corresponding normalised eigenvectors,

\frac{1}{\sqrt{2}}\begin{pmatrix}1\\0\\0\\1\end{pmatrix},\qquad \frac{1}{\sqrt{2}}\begin{pmatrix}1\\0\\0\\-1\end{pmatrix},\qquad \frac{1}{\sqrt{2}}\begin{pmatrix}0\\1\\1\\0\end{pmatrix},\qquad \frac{1}{\sqrt{2}}\begin{pmatrix}0\\1\\-1\\0\end{pmatrix}.

Thus, the eigenvectors do not depend on α. These eigenvectors are the Bell states, and they form an orthonormal basis in ℂ^4. For the Hamilton operator in the Hilbert space ℂ^8,

\hat{K}=\frac{\hat{H}}{\omega_{1}}=\sigma_{1}\otimes\sigma_{1}\otimes\sigma_{1}+\alpha\,\sigma_{3}\otimes\sigma_{3}\otimes\sigma_{3},

both the eigenvalues and the eigenvectors depend on α. Consider next the Hamilton operator in the Hilbert space ℂ^16,

\hat{K}=\frac{\hat{H}}{\omega_{1}}=\sigma_{1}\otimes\sigma_{1}\otimes\sigma_{1}\otimes\sigma_{1}+\alpha\,\sigma_{3}\otimes\sigma_{3}\otimes\sigma_{3}\otimes\sigma_{3}.

Here the eigenvalues depend on α, whereas the eigenvectors do not depend on α (Steeb and Hardy [9]).
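
These statements can be probed numerically. The following sketch, assuming numpy (the helper K and the sample values of α are ours), builds σ1^{⊗p} + α σ3^{⊗p} for p = 2, 3, 4; if the eigenvectors are independent of α, then the eigenvector matrix computed at one value of α must diagonalise K̂ at every other value:

import numpy as np
from functools import reduce

s1 = np.array([[0., 1.], [1., 0.]])
s3 = np.array([[1., 0.], [0., -1.]])

def K(p, a):
    # p-fold Kronecker products: sigma1 x ... x sigma1 + a * sigma3 x ... x sigma3
    return reduce(np.kron, [s1] * p) + a * reduce(np.kron, [s3] * p)

for p in (2, 3, 4):
    _, V = np.linalg.eigh(K(p, 0.3))     # eigenvectors at alpha = 0.3
    M = V.T @ K(p, 1.7) @ V              # same vectors, different alpha
    off = np.abs(M - np.diag(np.diag(M))).max()
    print(p, off)   # ~1e-16 for p = 2 and p = 4, of order 1 for p = 3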

Consider also the Hamilton operator constructed from the spin-1 matrices (with ϕ fixed),

\frac{\hat{H}}{\omega}=\begin{pmatrix} \cos\theta & \sin\theta\,e^{-i\phi}/\sqrt{2} & 0\\ \sin\theta\,e^{i\phi}/\sqrt{2} & 0 & \sin\theta\,e^{-i\phi}/\sqrt{2}\\ 0 & \sin\theta\,e^{i\phi}/\sqrt{2} & -\cos\theta \end{pmatrix}.

The eigenvalues are +1, 0, and −1 with the corresponding normalised eigenvectors,

\frac{1}{2}\begin{pmatrix}(1+\cos\theta)e^{-i\phi}\\ \sqrt{2}\,\sin\theta\\ (1-\cos\theta)e^{i\phi}\end{pmatrix},\qquad \frac{1}{\sqrt{2}}\begin{pmatrix}-\sin\theta\,e^{-i\phi}\\ \sqrt{2}\,\cos\theta\\ \sin\theta\,e^{i\phi}\end{pmatrix},\qquad \frac{1}{2}\begin{pmatrix}(1-\cos\theta)e^{-i\phi}\\ -\sqrt{2}\,\sin\theta\\ (1+\cos\theta)e^{i\phi}\end{pmatrix}.

These vectors form an orthonormal basis in the Hilbert space ℂ^3. For θ = 0 and ϕ = 0, we obtain the standard basis in ℂ^3.
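
A symbolic verification of these eigenpairs, assuming sympy (a sketch; the matrix is entered exactly as reconstructed above):

from sympy import Matrix, symbols, cos, sin, exp, sqrt, simplify, I

theta, phi = symbols('theta phi', real=True)
c, s = cos(theta), sin(theta)
H = Matrix([[c, s*exp(-I*phi)/sqrt(2), 0],
            [s*exp(I*phi)/sqrt(2), 0, s*exp(-I*phi)/sqrt(2)],
            [0, s*exp(I*phi)/sqrt(2), -c]])

v_plus  = Matrix([(1 + c)*exp(-I*phi)/2, s/sqrt(2), (1 - c)*exp(I*phi)/2])
v_zero  = Matrix([-s*exp(-I*phi)/sqrt(2), c, s*exp(I*phi)/sqrt(2)])
v_minus = Matrix([(1 - c)*exp(-I*phi)/2, -s/sqrt(2), (1 + c)*exp(I*phi)/2])

for lam, v in [(1, v_plus), (0, v_zero), (-1, v_minus)]:
    print(simplify(H*v - lam*v))   # the zero vector in each case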

3 Examples for Lie Groups

For Lie groups, the matrix F(α) is invertible for all α. Thus, we have

\frac{dF^{-1}(\alpha)}{d\alpha}=-F^{-1}(\alpha)\,\frac{dF(\alpha)}{d\alpha}\,F^{-1}(\alpha).

Let us give two sets of examples to illustrate the dependence of the eigenvalues or eigenvectors on a parameter α. Consider the matrices

A(\alpha)=\begin{pmatrix} \cos\alpha & -\sin\alpha\\ \sin\alpha & \cos\alpha \end{pmatrix},\qquad B(\alpha)=\begin{pmatrix} \cos\alpha & \sin\alpha\\ \sin\alpha & -\cos\alpha \end{pmatrix}.

The matrix A(α) is an element of the compact Lie group SO(2, ℝ). The matrix B(α) is an element of the compact Lie group O(2, ℝ), but not of SO(2, ℝ). For the matrix A(α), we have det(A(α)) = 1 and tr(A(α)) = 2cos(α). The eigenvalues of A(α) are given by λ+(α) = e^{iα} and λ−(α) = e^{−iα}, with the corresponding normalised eigenvectors

\frac{1}{\sqrt{2}}\begin{pmatrix}1\\-i\end{pmatrix},\qquad \frac{1}{\sqrt{2}}\begin{pmatrix}1\\i\end{pmatrix}.

Thus, the eigenvalues depend on α, but the eigenvectors do not depend on α. For the matrix B(α), we find that det(B(α)) = −1 and tr(B(α)) = 0. The eigenvalues of B(α) are given by λ+ = +1 and λ− = −1, with the corresponding normalised eigenvectors,

\begin{pmatrix}\cos(\alpha/2)\\ \sin(\alpha/2)\end{pmatrix},\qquad \begin{pmatrix}\sin(\alpha/2)\\ -\cos(\alpha/2)\end{pmatrix},

where we utilised sin(α) ≡ 2 sin(α/2)cos(α/2) and cos(α) ≡ cos^2(α/2) − sin^2(α/2). Thus, the eigenvalues do not depend on α, but the eigenvectors do.
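
A short numerical check, assuming numpy (the sample values of α are ours): the fixed vector (1, −i)^T/√2 is an eigenvector of A(α) for every α, while the eigenvector of B(α) belonging to the fixed eigenvalue +1 rotates with α:

import numpy as np

def A(a): return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
def B(a): return np.array([[np.cos(a), np.sin(a)], [np.sin(a), -np.cos(a)]])

u = np.array([1.0, -1.0j]) / np.sqrt(2.0)   # constant eigenvector of A(alpha)
for a in (0.4, 1.1, 2.5):
    print(np.allclose(A(a) @ u, np.exp(1j*a) * u))   # True: eigenvalue e^{i alpha}
    w = np.array([np.cos(a/2), np.sin(a/2)])         # alpha-dependent eigenvector
    print(np.allclose(B(a) @ w, w))                  # True: constant eigenvalue +1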

As the second set of examples consider the matrices,

C(\alpha)=\begin{pmatrix} \cosh\alpha & \sinh\alpha\\ \sinh\alpha & \cosh\alpha \end{pmatrix},\qquad D(\alpha)=\begin{pmatrix} \cosh\alpha & -\sinh\alpha\\ \sinh\alpha & -\cosh\alpha \end{pmatrix}.

The matrix C(α) is an element of the noncompact Lie group SO(1, 1, ℝ). The matrix D(α) is an element of the noncompact Lie group O(1, 1, ℝ), but not of SO(1, 1, ℝ). For the matrix C(α), we have det(C(α)) = 1 and tr(C(α)) = 2cosh(α). The eigenvalues of C(α) are given by λ+(α) = e^{α} and λ−(α) = e^{−α}, with the corresponding normalised eigenvectors,

\frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix},\qquad \frac{1}{\sqrt{2}}\begin{pmatrix}1\\-1\end{pmatrix}.

Thus, the eigenvalues depend on α, but the eigenvectors do not depend on α. For the matrix D(α), we find that det(D(α)) = −1 and tr(D(α)) = 0. The eigenvalues of D(α) are given by λ+ = +1 and λ− = −1, with the corresponding normalised eigenvectors,

\frac{1}{\sqrt{\cosh\alpha}}\begin{pmatrix}\cosh(\alpha/2)\\ \sinh(\alpha/2)\end{pmatrix},\qquad \frac{1}{\sqrt{\cosh\alpha}}\begin{pmatrix}\sinh(\alpha/2)\\ \cosh(\alpha/2)\end{pmatrix},

where we utilised sinh(α) ≡ 2 sinh(α/2)cosh(α/2) and cosh(α) ≡ cosh^2(α/2) + sinh^2(α/2). Thus, the eigenvalues do not depend on α, but the eigenvectors do. Note that the matrices A(α), B(α), and C(α) are normal matrices, whereas the matrix D(α) is normal only for α = 0. The scalar product of the two normalised eigenvectors of D(α) is sinh(α)/cosh(α) ≡ tanh(α) and is thus 0 only for α = 0; it tends to 1 as α → ∞. We note that

\exp\!\left(\alpha\,\frac{dA(\alpha)}{d\alpha}\bigg|_{\alpha=0}\right)=A(\alpha),\qquad \exp\!\left(\alpha\,\frac{dB(\alpha)}{d\alpha}\bigg|_{\alpha=0}\right)=C(\alpha),\qquad \exp\!\left(\alpha\,\frac{dC(\alpha)}{d\alpha}\bigg|_{\alpha=0}\right)=C(\alpha),\qquad \exp\!\left(\alpha\,\frac{dD(\alpha)}{d\alpha}\bigg|_{\alpha=0}\right)=A(\alpha).

Taking the derivatives of the matrices A(α), B(α), C(α), and D(α), we obtain

A(\alpha)\frac{dA(\alpha)}{d\alpha}=\frac{dA(\alpha)}{d\alpha}A(\alpha)=\begin{pmatrix}-\sin(2\alpha) & -\cos(2\alpha)\\ \cos(2\alpha) & -\sin(2\alpha)\end{pmatrix},\qquad B(\alpha)\frac{dB(\alpha)}{d\alpha}=-\frac{dB(\alpha)}{d\alpha}B(\alpha)=\begin{pmatrix}0 & 1\\ -1 & 0\end{pmatrix},
C(\alpha)\frac{dC(\alpha)}{d\alpha}=\frac{dC(\alpha)}{d\alpha}C(\alpha)=\begin{pmatrix}\sinh(2\alpha) & \cosh(2\alpha)\\ \cosh(2\alpha) & \sinh(2\alpha)\end{pmatrix},\qquad D(\alpha)\frac{dD(\alpha)}{d\alpha}=-\frac{dD(\alpha)}{d\alpha}D(\alpha)=\begin{pmatrix}0 & -1\\ -1 & 0\end{pmatrix}.

Hence, the commutator of A(α) and dA(α)/dα vanishes, the anticommutator of B(α) and dB(α)/dα vanishes, the commutator of C(α) and dC(α)/dα vanishes, and the anticommutator of D(α) and dD(α)/dα vanishes.
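
These four relations can be confirmed symbolically; a sketch assuming sympy, where sign = −1 tests the commutator and sign = +1 the anticommutator:

from sympy import Matrix, symbols, cos, sin, cosh, sinh, simplify

a = symbols('alpha', real=True)
A = Matrix([[cos(a), -sin(a)], [sin(a), cos(a)]])
B = Matrix([[cos(a), sin(a)], [sin(a), -cos(a)]])
C = Matrix([[cosh(a), sinh(a)], [sinh(a), cosh(a)]])
D = Matrix([[cosh(a), -sinh(a)], [sinh(a), -cosh(a)]])

for M, sign in [(A, -1), (B, 1), (C, -1), (D, 1)]:
    dM = M.diff(a)
    print(simplify(M*dM + sign*dM*M))   # the 2x2 zero matrix in all four cases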

An extension would be to consider the matrix

\begin{pmatrix} \operatorname{cn}(\alpha,k)/\operatorname{dn}^{2}(\alpha,k) & (2k^{2}-1)\operatorname{sn}(\alpha,k)/\operatorname{dn}(\alpha,k)\\ \operatorname{sn}(\alpha,k)/\operatorname{dn}(\alpha,k) & \operatorname{cn}(\alpha,k)/\operatorname{dn}^{2}(\alpha,k) \end{pmatrix},

which interpolates between the matrix A(α) and the matrix C(α); here sn, cn, and dn denote the Jacobi elliptic functions. For k = 0, we obtain the matrix A(α), and for k = 1, we obtain the matrix C(α).
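
This interpolation can be checked numerically with scipy, which provides the Jacobi elliptic functions as scipy.special.ellipj (note that scipy's ellipj takes the parameter m = k²); a sketch with sample values chosen by us:

import numpy as np
from scipy.special import ellipj

def E(a, k):
    sn, cn, dn, _ = ellipj(a, k**2)    # scipy's ellipj takes m = k**2
    return np.array([[cn/dn**2, (2*k**2 - 1)*sn/dn],
                     [sn/dn, cn/dn**2]])

a = 0.8
A = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
C = np.array([[np.cosh(a), np.sinh(a)], [np.sinh(a), np.cosh(a)]])
print(np.allclose(E(a, 0), A), np.allclose(E(a, 1), C))   # True True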

We can now utilise the Kronecker product ⊗ to construct 4×4 (and higher dimensional) matrices from the 2×2 matrices A(α), B(α), C(α), and D(α), where the eigenvalues depend on α and the eigenvectors are independent of α, and vice versa. For example, the eigenvalues of the 4×4 matrix B(α) ⊗ B(α) do not depend on α, but the eigenvectors do. The star product, defined for 2×2 matrices M(α) and N(α) as

M(\alpha)\star N(\alpha)=\begin{pmatrix} n_{11}(\alpha) & 0 & 0 & n_{12}(\alpha)\\ 0 & m_{11}(\alpha) & m_{12}(\alpha) & 0\\ 0 & m_{21}(\alpha) & m_{22}(\alpha) & 0\\ n_{21}(\alpha) & 0 & 0 & n_{22}(\alpha) \end{pmatrix},

can also be utilised to construct 4×4 matrices with the corresponding parameter dependence. For example, consider A(α) given above. Then the eigenvalues of A(α) ⋆ A(α) depend on α, whereas the eigenvectors are independent of α.
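
A minimal implementation of the star product (numpy assumed; the function name star is ours) confirms the claim for A(α) ⋆ A(α):

import numpy as np

def star(M, N):
    # N occupies the 'corner' entries, M the central 2x2 block.
    S = np.zeros((4, 4), dtype=complex)
    S[0, 0], S[0, 3], S[3, 0], S[3, 3] = N[0, 0], N[0, 1], N[1, 0], N[1, 1]
    S[1:3, 1:3] = M
    return S

a = 0.7
A = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
print(np.round(np.linalg.eigvals(star(A, A)), 6))
# e^{i alpha} and e^{-i alpha}, each twice: the eigenvalues depend on alpha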

4 Cases 2×2, 3×3, and 4×4 for Eigenvalues

Consider first the case n=2. Using the trace and determinant of the 2×2 matrix F(α), the two eigenvalues λ±(α) can be written as

\lambda_{\pm}(\alpha)=\frac{1}{2}\operatorname{tr}(F(\alpha))\pm\sqrt{\frac{1}{4}\bigl(\operatorname{tr}(F(\alpha))\bigr)^{2}-\det(F(\alpha))},

where λ+(α) + λ−(α) = tr(F(α)) and λ+(α)λ−(α) = det(F(α)).

Proposition 1. Let F(α) be a 2×2 matrix. If tr(F(α)) and det(F(α)) are constants, then the eigenvalues of F(α) are independent of α.

Examples are the matrices B(α) and D(α). In the following, we will omit the α dependence of the fjk’s and λ’s. For n=3, we have the three well-known relations,

\sum_{j=1}^{3}\lambda_{j}=\operatorname{tr}(F),\qquad S_{2}=f_{11}f_{22}+f_{11}f_{33}+f_{22}f_{33}-f_{12}f_{21}-f_{13}f_{31}-f_{23}f_{32},\qquad \prod_{j=1}^{3}\lambda_{j}=\det(F),

where S2=λ1λ2 + λ1λ3 + λ2λ3.

Proposition 2. Let F(α) be a 3×3 matrix. If tr(F), S2, and det(F) are constants, then the eigenvalues λ1, λ2, and λ3 of F(α) are independent of α.

Consider now n=4. We set

S_{2}:=\lambda_{1}\lambda_{2}+\lambda_{1}\lambda_{3}+\lambda_{1}\lambda_{4}+\lambda_{2}\lambda_{3}+\lambda_{2}\lambda_{4}+\lambda_{3}\lambda_{4},\qquad S_{3}:=\lambda_{1}\lambda_{2}\lambda_{3}+\lambda_{1}\lambda_{2}\lambda_{4}+\lambda_{1}\lambda_{3}\lambda_{4}+\lambda_{2}\lambda_{3}\lambda_{4}.

Then we obtain

\sum_{j=1}^{4}\lambda_{j}=\operatorname{tr}(F),
S_{2}=f_{11}f_{22}+f_{11}f_{33}+f_{11}f_{44}+f_{22}f_{33}+f_{22}f_{44}+f_{33}f_{44}-f_{12}f_{21}-f_{13}f_{31}-f_{14}f_{41}-f_{23}f_{32}-f_{24}f_{42}-f_{34}f_{43},
S_{3}=f_{11}f_{22}f_{33}+f_{11}f_{22}f_{44}+f_{11}f_{33}f_{44}+f_{22}f_{33}f_{44}-f_{12}f_{21}(f_{33}+f_{44})-f_{23}f_{32}(f_{11}+f_{44})-f_{13}f_{31}(f_{22}+f_{44})-f_{24}f_{42}(f_{11}+f_{33})-f_{14}f_{41}(f_{22}+f_{33})-f_{34}f_{43}(f_{11}+f_{22})+f_{12}f_{23}f_{31}+f_{13}f_{21}f_{32}+f_{12}f_{24}f_{41}+f_{13}f_{34}f_{41}+f_{14}f_{21}f_{42}+f_{23}f_{34}f_{42}+f_{14}f_{31}f_{43}+f_{24}f_{32}f_{43},
\prod_{j=1}^{4}\lambda_{j}=\det(F).

Proposition 3. Let F(α) be a 4×4 matrix. If tr(F), S2, S3, and det(F) are constants, then the eigenvalues λ1, λ2, λ3, and λ4 of F(α) are independent of α.

An example is the 4×4 matrix B(α) ⊗ B(α), where ⊗ denotes the Kronecker product.
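
Consistent with Proposition 3, the coefficients of the characteristic polynomial of B(α) ⊗ B(α), and hence tr(F), S2, S3, and det(F), are free of α; a symbolic sketch assuming sympy:

from sympy import Matrix, symbols, cos, sin, kronecker_product, simplify

a, lam = symbols('alpha lambda', real=True)
B = Matrix([[cos(a), sin(a)], [sin(a), -cos(a)]])
K = kronecker_product(B, B)

p = K.charpoly(lam)
print([simplify(c) for c in p.all_coeffs()])   # [1, 0, -2, 0, 1]: no alpha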

The eigenvalues of an n×n matrix M can also be found from the set of n equations,

\operatorname{tr}(M^{k})=\lambda_{1}^{k}+\lambda_{2}^{k}+\cdots+\lambda_{n}^{k},\qquad k=1,2,\ldots,n,

where the case k = 1 is the well-known fact that the trace of a square matrix is the sum of its eigenvalues. This provides us with a system of n equations for the n unknown eigenvalues λ1, …, λn, which could be solved, for instance, with a Newton method.
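
A sketch of such a Newton iteration, assuming numpy (the function name and starting values are ours): the unknowns are the eigenvalues, the equations are Σ_i λ_i^k = tr(M^k), and the Jacobian has entries k λ_i^{k−1}:

import numpy as np

def eigenvalues_from_traces(t, lam0, steps=50):
    # Solve sum_i lam_i^k = t[k-1] for k = 1, ..., n by Newton's method.
    lam = np.array(lam0, dtype=complex)
    n = len(t)
    for _ in range(steps):
        g = np.array([np.sum(lam**k) - t[k-1] for k in range(1, n+1)])
        J = np.array([[k * l**(k-1) for l in lam] for k in range(1, n+1)])
        lam = lam - np.linalg.solve(J, g)
    return lam

M = np.array([[2.0, 1.0], [1.0, 3.0]])
t = [np.trace(np.linalg.matrix_power(M, k)) for k in (1, 2)]
print(np.sort_complex(eigenvalues_from_traces(t, [1.0, 4.0])))
# matches np.linalg.eigvalsh(M): (5 -/+ sqrt(5))/2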

5 General Case

From the eigenvalue equation F(α)v(α)=λ(α)v(α), we obtain

\frac{dF(\alpha)}{d\alpha}v(\alpha)+F(\alpha)\frac{dv(\alpha)}{d\alpha}=\frac{d\lambda(\alpha)}{d\alpha}v(\alpha)+\lambda(\alpha)\frac{dv(\alpha)}{d\alpha},

where we assumed that v(α) and λ(α) are analytic. We insist that the eigenvectors are normalised, i.e., ‖v(α)‖^2 = v^*(α)v(α) = 1. Let v1(α), …, vm(α) be m linearly independent normalised (column) eigenvectors corresponding to the distinct eigenvalues λ1(α), …, λm(α) of the n×n matrix F(α). Let V(α) = [v1(α) … vm(α)] and Λ(α) = diag(λ1(α), …, λm(α)). The eigenvalue equations can be summarised as

F(\alpha)V(\alpha)-V(\alpha)\Lambda(\alpha)=0_{n\times m},

or equivalently

\bigl(I_{m}\otimes F(\alpha)-\Lambda(\alpha)\otimes I_{n}\bigr)\operatorname{vec}(V(\alpha))=0, \qquad (1)

where vec(V(α)) is the column vector [v1(α)^T … vm(α)^T]^T. Since vec(V(α)) may be written in the form

\operatorname{vec}(V(\alpha))=\sum_{j=1}^{m}e_{j}\otimes v_{j},

where e_j is the j-th standard basis vector of ℂ^m and v_j are the columns of V(α), the normalisation conditions can be rewritten as ‖vec(V(α))‖^2 = rank(V(α)).
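
Equation (1) and the normalisation condition are easy to verify numerically; a sketch assuming numpy, using C(α) from Section 3 as the test matrix:

import numpy as np

a = 0.9
F = np.array([[np.cosh(a), np.sinh(a)], [np.sinh(a), np.cosh(a)]])   # C(alpha)
lam, V = np.linalg.eigh(F)
n, m = V.shape

lhs = np.kron(np.eye(m), F) - np.kron(np.diag(lam), np.eye(n))
vecV = V.reshape(-1, order='F')          # stack the columns of V

print(np.allclose(lhs @ vecV, 0.0))                        # Equation (1)
print(np.isclose(vecV @ vecV, np.linalg.matrix_rank(V)))   # ||vec V||^2 = rank V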

5.1 Eigenvectors that do not Depend on α

Suppose that the eigenvector v does not depend on α. Then it follows that

\frac{dF(\alpha)}{d\alpha}v=\frac{d\lambda(\alpha)}{d\alpha}v,

which is an eigenvalue equation. Consequently, the eigenvalue λ(α) satisfies the two characteristic equations

\det\bigl(\lambda(\alpha)I_{n}-F(\alpha)\bigr)=\det\!\left(\frac{d}{d\alpha}\bigl(\lambda(\alpha)I_{n}-F(\alpha)\bigr)\right)=0.

Theorem 1. If v is an eigenvector of F(α) that is independent of α, then v is in the null space of both

\lambda(\alpha)I_{n}-F(\alpha)\qquad\text{and}\qquad \frac{d}{d\alpha}\bigl(\lambda(\alpha)I_{n}-F(\alpha)\bigr).

Now suppose that all eigenvectors (i.e., V) do not depend on α. Then differentiating Equation (1) and applying the above theorem yield the following corollary.

Corollary 1. Let V be a matrix whose column vectors form a complete linearly independent set of eigenvectors of F(α), and let Λ(α) be the diagonal matrix with the eigenvalues on the diagonal corresponding to the eigenvector columns in V. If all eigenvectors of F(α) are independent of α, then

\bigl(I_{m}\otimes F(\alpha)-\Lambda(\alpha)\otimes I_{n}\bigr)\operatorname{vec}(V)=\left(I_{m}\otimes\frac{dF(\alpha)}{d\alpha}-\frac{d\Lambda(\alpha)}{d\alpha}\otimes I_{n}\right)\operatorname{vec}(V)=0.

Linear functions. For the special case F(α) = X + αY, we have (dλ(α)/dα)v = Yv, so that λ(α) is linear in α. The coefficient of α in λ(α) is an eigenvalue of Y with eigenvector v, and the constant term of λ(α) is an eigenvalue of X with eigenvector v. It follows that we must construct F(α) and Λ(α) = D_X + αD_Y such that

Xv_{j}=(D_{X})_{jj}v_{j},\qquad Yv_{j}=(D_{Y})_{jj}v_{j}.

Thus, we have the following corollary.

Corollary 2. Let F(α) = X + αY be linear in α. Every normalised eigenvector v of F(α) is independent of α if and only if v is simultaneously an eigenvector of X and Y.
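
For the spin example of Section 2 this is immediate: X = σ1⊗σ1 and Y = σ3⊗σ3 commute, and the Bell states are simultaneous eigenvectors. A numerical sketch (numpy assumed):

import numpy as np

s1 = np.array([[0., 1.], [1., 0.]])
s3 = np.array([[1., 0.], [0., -1.]])
X, Y = np.kron(s1, s1), np.kron(s3, s3)

print(np.allclose(X @ Y, Y @ X))                 # True: X and Y commute
b = np.array([1., 0., 0., 1.]) / np.sqrt(2.)     # a Bell state
# b is an eigenvector of X and of Y (eigenvalue 1 for both), hence an
# alpha-independent eigenvector of F(alpha) = X + alpha*Y:
print(np.allclose(X @ b, b), np.allclose(Y @ b, b))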

Normal matrices. If F(α) is normal and the eigenvectors are independent of α, then F(α) = UD(α)U*, where D(α) is a diagonal matrix depending on α and U is a unitary matrix independent of α.

5.2 Eigenvalues that do not Depend on α

Suppose that the eigenvalue λ does not depend on α. Then dλ/dα=0, and we have

\frac{dF(\alpha)}{d\alpha}v(\alpha)+F(\alpha)\frac{dv(\alpha)}{d\alpha}=\lambda\frac{dv(\alpha)}{d\alpha}.

It follows that

\bigl(\lambda I_{n}-F(\alpha)\bigr)\frac{dv(\alpha)}{d\alpha}=\frac{dF(\alpha)}{d\alpha}v(\alpha),

which is a differential equation for v(α), where the matrix λI_n − F(α) is not invertible. Now suppose that all eigenvalues (i.e., Λ) do not depend on α. Then differentiating Equation (1) yields the following theorem.

Theorem 2. Let V(α) be a matrix whose column vectors form a complete linearly independent set of eigenvectors of F(α), and let Λ be the diagonal matrix with the eigenvalues on the diagonal corresponding to the eigenvector columns in V. If all eigenvalues of F(α) are independent of α, then

\bigl(\Lambda\otimes I_{n}-I_{m}\otimes F(\alpha)\bigr)\operatorname{vec}(V(\alpha))=0,\qquad \bigl(\Lambda\otimes I_{n}-I_{m}\otimes F(\alpha)\bigr)\frac{d}{d\alpha}\operatorname{vec}(V(\alpha))=\left(I_{m}\otimes\frac{dF(\alpha)}{d\alpha}\right)\operatorname{vec}(V(\alpha)). \qquad (2)

The matrix Λ ⊗ I_n − I_m ⊗ F(α) is clearly not invertible.
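
The differential equation for v(α) derived above can nevertheless be integrated numerically: for a symmetric F(α), the minimum-norm least-squares solution of (λI_n − F(α))dv/dα = (dF/dα)v is orthogonal to v and so preserves the normalisation. A sketch for B(α) and its constant eigenvalue λ = +1, assuming numpy and scipy (the use of the minimum-norm solution is our choice for fixing the component along v):

import numpy as np
from scipy.integrate import solve_ivp

def B(a):  return np.array([[np.cos(a), np.sin(a)], [np.sin(a), -np.cos(a)]])
def dB(a): return np.array([[-np.sin(a), np.cos(a)], [np.cos(a), np.sin(a)]])

lam = 1.0   # the alpha-independent eigenvalue of B(alpha)

def rhs(a, v):
    # lstsq returns the minimum-norm solution of the singular system
    w, *_ = np.linalg.lstsq(lam*np.eye(2) - B(a), dB(a) @ v, rcond=None)
    return w

sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])              # integrated eigenvector at alpha = 2
print(np.cos(1.0), np.sin(1.0))  # the exact (cos(alpha/2), sin(alpha/2))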

Involutions and constant F^2(α). If F^2(α) is constant, then F(α)dF(α)/dα = −(dF(α)/dα)F(α). Multiplying Equation (2) by Λ ⊗ I_n + I_m ⊗ F(α) yields

\bigl(\Lambda^{2}\otimes I_{n}-I_{m}\otimes F^{2}(\alpha)\bigr)\frac{d}{d\alpha}\operatorname{vec}(V(\alpha))=0.

Thus, the columns of dV(α)/dα are all eigenvectors of F^2(α). Since Λ and F^2(α) are constant, it follows that dV(α)/dα is constant, and V(α) = X + αY for appropriate matrices X and Y independent of α. Thus, we have the following corollary.

Corollary 3. If all of the eigenvalues of F(α) are independent of α, and F^2(α) is independent of α, then the eigenvectors of F(α) are linear in α.

Normal matrices. If F(α) is normal and the eigenvalues are independent of α, then F(α) = U(α)DU*(α), where D is a constant diagonal matrix and the unitary matrix U(α) depends on α.

6 Summary

We have derived sufficient conditions for a square matrix F(α) = (f_{jk}(α)) such that the eigenvalues of F(α) depend on α while the eigenvectors are independent of α, and vice versa, such that the eigenvalues are independent of α while the eigenvectors depend on α. Several applications were provided.


Corresponding author: Willi-Hans Steeb, International School for Scientific Computing, University of Johannesburg, Auckland Park 2006, South Africa.

Acknowledgments

The first author is supported by the National Research Foundation (NRF), South Africa. This work was based on research supported by the National Research Foundation. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors; therefore, the NRF does not accept any liability in this regard.

References

[1] N. Bourbaki, Elements of Mathematics: Lie Groups and Lie Algebras, Addison-Wesley, Reading 1975.

[2] S. Helgason, Differential Geometry, Lie Groups and Symmetric Spaces, American Mathematical Society, Providence, RI 2001. doi:10.1090/gsm/034

[3] C. von Westenholz, Differential Forms in Mathematical Physics, revised edition, North-Holland, Amsterdam 1981.

[4] D. Bump, Lie Groups, Springer, New York 2000.

[5] R. Gilmore, Lie Groups, Lie Algebras and Some of Their Applications, Cambridge University Press, Cambridge 2008.

[6] W.-H. Steeb, Continuous Symmetries, Lie Algebras, Differential Equations and Computer Algebra, 2nd ed., World Scientific, Singapore 2007. doi:10.1142/6515

[7] W.-H. Steeb, I. Tanski, and Y. Hardy, Problems and Solutions for Groups, Lie Groups, Lie Algebras with Applications, World Scientific, Singapore 2012. doi:10.1142/8378

[8] W.-H. Steeb and Y. Hardy, Problems and Solutions in Quantum Computing and Quantum Information, 3rd ed., World Scientific, Singapore 2011. doi:10.1142/8249

[9] W.-H. Steeb and Y. Hardy, Bose, Spin and Fermi Systems: Problems and Solutions, World Scientific, Singapore 2015. doi:10.1142/9334

[10] W.-H. Steeb and Y. Hardy, Quantum Mechanics Using Computer Algebra, 2nd ed., World Scientific, Singapore 2010. doi:10.1142/7751

[11] W.-H. Steeb, A. J. van Tonder, C. M. Villet, and S. J. M. Brits, Found. Phys. Lett. 1, 147 (1988).

[12] W.-H. Steeb and J. A. Louw, J. Phys. Soc. Jpn. 56, 3082 (1987).

[13] W.-H. Steeb, Phys. Scr. 81, 025012 (2010).

[14] K. Aizu, J. Math. Phys. 4, 762 (1963).

[15] K. Kumar, J. Math. Phys. 6, 1923 (1965).

[16] R. M. Wilcox, J. Math. Phys. 8, 962 (1967).

[17] W. H. Louisell, Quantum Statistical Properties of Radiation, J. Wiley, New York 1973.

[18] W.-H. Steeb and Y. Hardy, Matrix Calculus and Kronecker Product: A Practical Approach to Linear and Multilinear Algebra, 2nd ed., World Scientific, Singapore 2011. doi:10.1142/8030

Received: 2015-3-25
Accepted: 2015-4-26
Published Online: 2015-5-26
Published in Print: 2015-8-1

©2015 by De Gruyter
