Open Access Article

From asymptotic distribution and vague convergence to uniform convergence, with numerical applications

  • Giovanni Barbarino, Sven-Erik Ekström, Carlo Garoni, David Meadon, Stefano Serra-Capizzano, and Paris Vassalos
Published/Copyright: July 16, 2025

Abstract

Let $\{\Lambda_n = \{\lambda_{1,n},\ldots,\lambda_{d_n,n}\}\}_n$ be a sequence of finite multisets of real numbers such that $d_n\to\infty$ as $n\to\infty$, and let $f:\Omega\subseteq\mathbb{R}^d\to\mathbb{R}$ be a Lebesgue measurable function defined on a domain $\Omega$ with $0<\mu_d(\Omega)<\infty$, where $\mu_d$ is the Lebesgue measure in $\mathbb{R}^d$. We say that $\{\Lambda_n\}_n$ has an asymptotic distribution described by $f$, and we write $\{\Lambda_n\}_n\sim f$, if (*) $\lim_{n\to\infty}\frac{1}{d_n}\sum_{i=1}^{d_n}F(\lambda_{i,n}) = \frac{1}{\mu_d(\Omega)}\int_\Omega F(f(\mathbf{x}))\,\mathrm{d}\mathbf{x}$ for every continuous function $F$ with bounded support. If $\Lambda_n$ is the spectrum of a matrix $A_n$, we say that $\{A_n\}_n$ has an asymptotic spectral distribution described by $f$ and we write $\{A_n\}_n\sim_\lambda f$. In the case where $d=1$, $\Omega$ is a bounded interval, $\Lambda_n\subseteq f(\Omega)$ for all $n$, and $f$ satisfies suitable conditions, Bogoya, Böttcher, Grudsky, and Maximenko proved that the asymptotic distribution (*) implies the uniform convergence to 0 of the difference between the properly sorted vector $[\lambda_{1,n},\ldots,\lambda_{d_n,n}]$ and the vector of samples $[f(x_{1,n}),\ldots,f(x_{d_n,n})]$, i.e., (**) $\lim_{n\to\infty}\max_{i=1,\ldots,d_n}|f(x_{i,n})-\lambda_{\tau_n(i),n}| = 0$, where $x_{1,n},\ldots,x_{d_n,n}$ is a uniform grid in $\Omega$ and $\tau_n$ is the sorting permutation. We extend this result to the case where $d\geqslant 1$ and $\Omega$ is a Peano–Jordan measurable set (i.e., a bounded set with $\mu_d(\partial\Omega)=0$). We also formulate and prove a uniform convergence result analogous to (**) in the more general case where the function $f$ takes values in the space of $k\times k$ matrices. Our derivations are based on the concept of monotone rearrangement (quantile function), on matrix analysis arguments stemming from the theory of generalized locally Toeplitz sequences, and on the observation that any finite multiset of numbers $\Lambda_n=\{\lambda_{1,n},\ldots,\lambda_{d_n,n}\}$ can always be interpreted as the spectrum of a matrix $A_n=\mathrm{diag}(\lambda_{1,n},\ldots,\lambda_{d_n,n})$. The theoretical results are illustrated through numerical experiments, and a reinterpretation of them in terms of vague convergence of probability measures is hinted at.

2010 MSC: 47B06; 15A18; 60B10; 15B05

1 Introduction

Throughout this paper, a matrix-sequence is a sequence of the form { A n } n , where A n is a square matrix such that size(A n ) = d n → ∞ as n → ∞. Matrix-sequences arise in several contexts. For example, when a linear differential equation is discretized by a linear numerical method, such as the finite difference method, the finite element method, the isogeometric analysis, etc., the actual computation of the numerical solution reduces to solving a linear system A n u n = f n . The size d n of this system diverges to ∞ as the mesh-fineness parameter n tends to ∞, and we are therefore in the presence of a matrix-sequence { A n } n . It is often observed in practice that { A n } n belongs to the class of generalized locally Toeplitz (GLT) sequences and it therefore enjoys an asymptotic singular value and/or eigenvalue distribution. We refer the reader to the books [1], [2] and the papers [3], [4], [5], [6] for a comprehensive treatment of GLT sequences and to ref. [7] for a more concise introduction to the subject. Another noteworthy example of matrix-sequences concerns the finite sections of an infinite Toeplitz matrix. An infinite (block) Toeplitz matrix is a matrix of the form

(1.1) $[f_{i-j}]_{i,j=1}^{\infty} = \begin{bmatrix} f_0 & f_{-1} & f_{-2} & & & \\ f_1 & f_0 & f_{-1} & f_{-2} & & \\ f_2 & f_1 & f_0 & f_{-1} & f_{-2} & \\ & f_2 & f_1 & f_0 & f_{-1} & \ddots \\ & & f_2 & f_1 & f_0 & \ddots \\ & & & \ddots & \ddots & \ddots \end{bmatrix},$

where the entries …, f −2, f −1, f 0, f 1, f 2, … are k × k matrices (blocks) for some k ⩾ 1. If k = 1, then (1.1) is a classical (scalar) Toeplitz matrix. The nth section of (1.1) is the matrix defined by

(1.2) $A_n = [f_{i-j}]_{i,j=1}^{n}.$

A case of special interest arises when the entries $f_k$ are the Fourier coefficients of a function $f:[-\pi,\pi]\to\mathbb{C}^{k\times k}$ with components $f_{ij}\in L^1([-\pi,\pi])$, i.e.,

$f_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(\theta)\, e^{-\mathrm{i}k\theta}\,\mathrm{d}\theta, \qquad k\in\mathbb{Z},$

where the integrals are computed componentwise. In this case, the matrix A n in (1.2) is denoted by T n (f) and is referred to as the nth (block) Toeplitz matrix generated by f. The asymptotic singular value and eigenvalue distributions of the Toeplitz sequence { T n ( f ) } n have been deeply investigated over time, starting from Szegő’s first limit theorem [8] and the Avram–Parter theorem [9], [10], up to the works by Tyrtyshnikov–Zamarashkin [11], [12], [13] and Tilli [14], [15]. For more on this subject, see [1], Ch. 6] and [16], Ch. 5].

An important concept related to matrix-sequences is the notion of asymptotic spectral (or eigenvalue) distribution. After the publication of Tyrtyshnikov’s paper in 1996 [11], there has been an ever growing interest in this topic, which led, among others, to the birth of GLT sequences [17], [18], [19]. The reasons behind this widespread interest are not purely academic, because the asymptotic spectral distribution has significant practical implications. For example, suppose that { A n } n is a matrix-sequence resulting from the discretization of a differential equation A u = f through a given numerical method. Then, the asymptotic spectral distribution of { A n } n can be used to measure the accuracy of the method in approximating the spectrum of the differential operator A [20], to establish whether the method preserves the so-called average spectral gap [21], or to formulate analytical predictions for the eigenvalues of both A n and A [22]. Moreover, the asymptotic spectral distribution of { A n } n can be exploited to design efficient iterative solvers for linear systems with matrix A n and to analyze/predict their performance; see [23], [24] for accurate convergence estimates of Krylov methods based on the asymptotic spectral distribution and [1], p. 3] for more details on this subject.

Before proceeding further, let us introduce the formal definition of asymptotic singular value and eigenvalue distribution. Let C c ( R ) (resp., C c ( C ) ) be the space of continuous complex-valued functions with bounded support defined on R (resp., C ). If A C m × m , the singular values and eigenvalues of A are denoted by σ 1(A), …, σ m (A) and λ 1(A), …, λ m (A), respectively. If λ 1(A), …, λ m (A) are real, their maximum and minimum are also denoted by λ max(A) and λ min(A). We denote by μ d the Lebesgue measure in R d . Throughout this paper, unless otherwise specified, “measurable” means “Lebesgue measurable” and “a.e.” means “almost everywhere (with respect to the Lebesgue measure)”. A matrix-valued function f : Ω R d C k × k is said to be measurable (resp., bounded, continuous, continuous a.e., in L p (Ω), etc.) if its components f i j : Ω C , i, j = 1, …, k, are measurable (resp., bounded, continuous, continuous a.e., in L p (Ω), etc.).

Definition 1.1

(asymptotic singular value and eigenvalue distribution of a matrix-sequence). Let { A n } n be a matrix-sequence with A n of size d n , and let f : Ω R d C k × k be measurable with 0 < μ d (Ω) < ∞.

  1. We say that { A n } n has an asymptotic eigenvalue (or spectral) distribution described by f if

    (1.3) $\lim_{n\to\infty}\frac{1}{d_n}\sum_{i=1}^{d_n}F(\lambda_i(A_n)) = \frac{1}{\mu_d(\Omega)}\int_\Omega \frac{\sum_{i=1}^{k}F(\lambda_i(f(\mathbf{x})))}{k}\,\mathrm{d}\mathbf{x} \qquad \forall\, F\in C_c(\mathbb{C}).$

    In this case, f is called the eigenvalue (or spectral) symbol of { A n } n and we write { A n } n λ f .

  2. We say that { A n } n has an asymptotic singular value distribution described by f if

    (1.4) $\lim_{n\to\infty}\frac{1}{d_n}\sum_{i=1}^{d_n}F(\sigma_i(A_n)) = \frac{1}{\mu_d(\Omega)}\int_\Omega \frac{\sum_{i=1}^{k}F(\sigma_i(f(\mathbf{x})))}{k}\,\mathrm{d}\mathbf{x} \qquad \forall\, F\in C_c(\mathbb{R}).$

    In this case, f is called the singular value symbol of { A n } n and we write { A n } n σ f .

We remark that Definition 1.1 is well-posed as the functions x i = 1 k F ( λ i ( f ( x ) ) ) and x i = 1 k F ( σ i ( f ( x ) ) ) appearing in (1.3) and (1.4) are measurable [5], Lem. 2.1]. Throughout this paper, whenever we write a relation such as { A n } n λ f or { A n } n σ f , it is understood that { A n } n and f are as in Definition 1.1, i.e., { A n } n is a matrix-sequence and f is a measurable function taking values in C k × k for some k and defined on a subset Ω of some R d with 0 < μ d (Ω) < ∞. Since any finite multiset of numbers can always be interpreted as the spectrum of a matrix, a byproduct of Definition 1.1 is the following definition.

Definition 1.2

(asymptotic distribution of a sequence of finite multisets of numbers). Let { Λ n = { λ 1 , n , , λ d n , n } } n be a sequence of finite multisets of numbers such that d n → ∞ as n → ∞, and let f be as in Definition 1.1. We say that { Λ n } n has an asymptotic distribution described by f, and we write { Λ n } n f , if { A n } n λ f , where A n is any matrix whose spectrum equals Λ n (e.g., A n = diag ( λ 1 , n , , λ d n , n ) ).
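To make Definitions 1.1 and 1.2 concrete, the following minimal numerical sketch (our own illustration with a specific symbol, not one of the experiments reported later in the paper) checks relation (1.3) in the scalar case k = d = 1 for the tridiagonal Toeplitz matrices A_n = T_n(f) generated by f(θ) = 2 − 2cos θ on Ω = [−π, π], using a smooth test function F with bounded support; the two sides of (1.3) agree up to discretization error.

    # Minimal numerical check of (1.3) for A_n = T_n(f), f(theta) = 2 - 2*cos(theta) on [-pi, pi].
    # Illustrative sketch only (k = d = 1); not one of the paper's experiments.
    import numpy as np

    def F(t):
        # continuous test function with bounded support: a smooth bump supported on (0, 4)
        t = np.asarray(t, dtype=float)
        out = np.zeros_like(t)
        inside = (t > 0) & (t < 4)
        out[inside] = np.exp(-1.0 / (t[inside] * (4 - t[inside])))
        return out

    f = lambda theta: 2 - 2 * np.cos(theta)

    for n in (100, 400, 1600):
        A_n = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # T_n(f)
        lhs = np.mean(F(np.linalg.eigvalsh(A_n)))                # (1/d_n) * sum_i F(lambda_i(A_n))
        theta = np.linspace(-np.pi, np.pi, 100_001)
        rhs = np.mean(F(f(theta)))                               # (1/mu_1(Omega)) * int_Omega F(f)
        print(n, lhs, rhs)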

In the previous literature, it has often been claimed that the informal meaning behind the asymptotic spectral distribution (1.3) is the following [5], Rem. 2.9]: assuming that there exist k a.e. continuous functions λ 1 ( f ) , , λ k ( f ) : Ω C such that λ 1(f( x )), …, λ k (f( x )) are the eigenvalues of f( x ) for every x ∈ Ω, the eigenvalues of A n , except possibly for o(d n ) outliers, can be subdivided into k different subsets of approximately the same cardinality, and, for n large enough, the eigenvalues belonging to the ith subset are approximately equal to the samples of λ i (f) over a uniform grid in the domain Ω. For instance, if d = 1, d n = nk, and Ω = [a, b], then, assuming we have no outliers, the eigenvalues of A n are approximately equal to

$\lambda_i\!\left(f\!\left(a + j\,\frac{b-a}{n}\right)\right), \qquad j=1,\ldots,n, \quad i=1,\ldots,k,$

for n large enough; similarly, if d = 2, d n = n 2 k, and Ω = [a 1, b 1] × [a 2, b 2], then, assuming we have no outliers, the eigenvalues of A n are approximately equal to

$\lambda_i\!\left(f\!\left(a_1 + j_1\,\frac{b_1-a_1}{n},\; a_2 + j_2\,\frac{b_2-a_2}{n}\right)\right), \qquad j_1,j_2=1,\ldots,n, \quad i=1,\ldots,k,$

for n large enough; and so on for d ⩾ 3. In the case where d = k = 1, Ω = [a, b] is a bounded interval, { λ 1 ( A n ) , , λ d n ( A n ) } f ( Ω ) for all n, and f is a real function satisfying suitable conditions, a precise mathematical formulation and proof of the previous informal meaning was given by Bogoya, Böttcher, Grudsky, and Maximenko first in the Toeplitz case A n = T n (f) [25], Th. 1.5] and then in the case of an arbitrary A n [26], Th. 1.3]. In a nutshell, they proved the uniform convergence to 0 of the difference λ (A n ) − f( x n ), i.e.,

(1.5) $\lim_{n\to\infty}\left\| f(\boldsymbol{x}_n) - \boldsymbol{\lambda}(A_n) \right\|_\infty = 0,$

where ‖ ⋅ ‖ is the usual ∞-norm of vectors, λ (A n ) is the properly sorted vector of eigenvalues of A n , f( x n ) is the vector of samples [ f ( x 1 , n ) , , f ( x d n , n ) ] , and x n = [ x 1 , n , , x d n , n ] is a uniform grid in Ω.
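For a concrete feel for (1.5), the following short sketch (an illustration with a symbol of our own choosing, not taken from the paper's experiments) compares the sorted eigenvalues of A_n = T_n(f), f(θ) = 2 − 2cos θ, with the sorted samples of f on a uniform grid in [−π, π]; the maximum difference decreases as n grows.

    # Illustration of (1.5) for A_n = T_n(f) with f(theta) = 2 - 2*cos(theta) on [-pi, pi]
    # (illustrative sketch; the paper's own numerical experiments are in Section 4).
    import numpy as np

    f = lambda theta: 2 - 2 * np.cos(theta)
    for n in (50, 200, 800):
        A_n = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # T_n(f)
        eigs = np.sort(np.linalg.eigvalsh(A_n))                  # properly sorted eigenvalues
        grid = -np.pi + np.arange(1, n + 1) * 2 * np.pi / n      # uniform grid in [-pi, pi]
        samples = np.sort(f(grid))                               # sorted samples of f
        print(n, np.max(np.abs(samples - eigs)))                 # tends to 0 as n grows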

In this paper, using the concept of monotone rearrangement (quantile function) and matrix analysis arguments from the theory of GLT sequences, we provide deeper insights into the notion of asymptotic spectral distribution by presenting precise mathematical formulations and proofs of the informal meaning behind (1.3) that are more general than [26], Th. 1.3]. Our formulations are made in terms of uniform convergence to 0 of differences of vectors, in complete analogy with (1.5).

  1. In our first main result (Theorem 2.1), we extend [26], Th. 1.3] by formulating and proving the informal meaning behind (1.3) in the case where d ⩾ 1 and the domain Ω of the spectral symbol f is a Peano–Jordan measurable set (i.e., a bounded set with μ d (∂Ω) = 0). In Corollary 2.1, we prove (a slightly more general version of) [26], Th. 1.3] as a corollary of Theorem 2.1.

  2. In our second main result (Theorem 2.2), we prove for d = 1 that Theorem 2.1 can be strengthened in the case where the spectrum of A n is contained in the image of f for every n and f satisfies some mild assumptions. More precisely, in this case we prove that the eigenvalues of A n are exact samples of f over an asymptotically uniform grid (see Section 2 for the corresponding definition).

  3. In our last main result (Theorem 2.3), we extend [26], Th. 1.3] by formulating and proving the informal meaning behind (1.3) in the case where k ⩾ 1, i.e., the spectral symbol f is a matrix-valued function.

The results herein, including the main Theorems 2.1–2.3, are actually formulated in terms of asymptotic distributions of sequences of finite multisets of numbers (Definition 1.2) and not in terms of asymptotic spectral distributions (Definition 1.1). This is done to allow for a better comparison with the previous literature and especially with [26]. Reinterpreting the results in terms of asymptotic spectral distributions is a straightforward rephrasing exercise that is left to the reader.

The paper is organized as follows. In Section 2, we formulate the main results. In Section 3, we prove the main results. In Section 4, we illustrate the main results through numerical experiments. In Section 5, we draw conclusions and we also highlight the relation existing between the asymptotic distribution and the vague convergence of probability measures (a relation that allows for a reinterpretation of the main results of this paper in a probabilistic perspective).

2 Main results

2.1 Notation and terminology

Throughout this paper, the cardinality, the interior, the closure, and the characteristic (indicator) function of a set E are denoted by |E|, E , E ̄ , and χ E , respectively. We use “increasing” as a synonym of “non-decreasing”. We use “strictly increasing” whenever we want to specify that the increase is strict. Similarly, we use “decreasing” as a synonym of “non-increasing” and “strictly decreasing” whenever we want to specify that the decrease is strict. The word “monotone” means either “increasing” or “decreasing”, while “strictly monotone” means either “strictly increasing” or “strictly decreasing”. If z C and ɛ > 0, we denote by D(z, ɛ) the open disk with center z and radius ɛ, i.e., D ( z , ε ) = { w C : | w z | < ε } . If S C and ɛ > 0, we denote by S ɛ = ⋃ zS D(z, ɛ) the ɛ-expansion of S, i.e., the set of points whose distance from S is smaller than ɛ. We use a notation borrowed from probability theory to indicate sets. For example, if f , g : Ω R d R , then {f ⩽ 1} = { x ∈ Ω: f( x ) ⩽ 1}, μ d {f > 0, g < 0} is the measure of the set { x ∈ Ω: f( x ) > 0, g( x ) < 0}, etc.

2.1.1 Multi-index notation

A multi-index i of size d, also called a d-index, is a vector in $\mathbb{Z}^d$. 0 and 1 are the vectors of all zeros and all ones, respectively (their size will be clear from the context). For any vector $\boldsymbol{n}\in\mathbb{R}^d$, we set $N(\boldsymbol{n})=\prod_{j=1}^{d}n_j$ and we write $\boldsymbol{n}\to\infty$ to indicate that $\min(\boldsymbol{n})\to\infty$. If $\boldsymbol{h},\boldsymbol{k}\in\mathbb{R}^d$, an inequality such as $\boldsymbol{h}\leqslant\boldsymbol{k}$ means that $h_j\leqslant k_j$ for all j = 1, …, d. If $\boldsymbol{h},\boldsymbol{k}$ are d-indices such that $\boldsymbol{h}\leqslant\boldsymbol{k}$, the d-index range $\{\boldsymbol{h},\ldots,\boldsymbol{k}\}$ is the set $\{\boldsymbol{i}\in\mathbb{Z}^d:\boldsymbol{h}\leqslant\boldsymbol{i}\leqslant\boldsymbol{k}\}$. We assume for this set the standard lexicographic ordering:

$\Big[\,\ldots\,\big[\,[\,(i_1,\ldots,i_d)\,]_{i_d=h_d,\ldots,k_d}\,\big]_{i_{d-1}=h_{d-1},\ldots,k_{d-1}}\,\ldots\,\Big]_{i_1=h_1,\ldots,k_1}.$

For instance, in the case d = 2 the ordering is

( h 1 , h 2 ) , ( h 1 , h 2 + 1 ) , , ( h 1 , k 2 ) , ( h 1 + 1 , h 2 ) , ( h 1 + 1 , h 2 + 1 ) , , ( h 1 + 1 , k 2 ) , , ( k 1 , h 2 ) , ( k 1 , h 2 + 1 ) , , ( k 1 , k 2 ) .

When a d-index i varies in a finite set I Z d (this is simply written as i I ), it is always understood that i follows the lexicographic ordering. For instance, if we write x = [ x i ] i I , then x is a vector of size | I | whose components are indexed by the d-index i varying in I according to the lexicographic ordering. Similarly, if we write X = [ x i j ] i , j I , then X is a square matrix of size | I | whose components are indexed by a pair of d-indices i , j , both varying in I according to the lexicographic ordering. When I is a d-index range { h , …, k }, the notation i I is often replaced by i = h , …, k . Operations involving d-indices (or general vectors with d components) that have no meaning in the vector space R d must always be interpreted in the componentwise sense. For instance, jh = (j 1 h 1, …, j d h d ), i / j = (i 1/j 1, …, i d /j d ), etc. If a , b R d and a b , we denote by [ a , b ] the closed d-dimensional rectangle given by [a 1, b 1] ×⋯× [a d , b d ].
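For readers who prefer code to formulas, the lexicographic ordering of a d-index range coincides with the enumeration order of nested loops (or of itertools.product in Python), as the following toy snippet illustrates for d = 2.

    # The lexicographic ordering of the 2-index range {(1,1), ..., (2,3)} coincides with the
    # enumeration order of itertools.product (the first index varies slowest).
    from itertools import product

    h, k = (1, 1), (2, 3)
    print(list(product(range(h[0], k[0] + 1), range(h[1], k[1] + 1))))
    # [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3)]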

2.1.2 Essential range

Given a measurable function f : Ω R d C , the essential range of f is denoted by E R ( f ) . We recall that E R ( f ) is defined as

$\mathcal{ER}(f) = \{ z\in\mathbb{C} : \mu_d\{f\in D(z,\varepsilon)\} > 0 \text{ for all } \varepsilon > 0 \}.$

It is clear that $\mathcal{ER}(f)\subseteq\overline{f(\Omega)}$. Moreover, $\mathcal{ER}(f)$ is closed and $f\in\mathcal{ER}(f)$ a.e.;[1] see, e.g., [1], Lem. 2.1]. If f is real then $\mathcal{ER}(f)$ is a subset of $\mathbb{R}$. In this case, we define the essential infimum (resp., supremum) of f on Ω as the infimum (resp., supremum) of $\mathcal{ER}(f)$:

$\operatorname*{ess\,inf}_{\Omega} f = \inf \mathcal{ER}(f), \qquad \operatorname*{ess\,sup}_{\Omega} f = \sup \mathcal{ER}(f).$

2.1.3 Asymptotically uniform grids

Let [ a , b ] be a closed d-dimensional rectangle and let { n = n (n)} n be a sequence of d-indices in N d such that n → ∞ as n → ∞. For every n, let G n ( n ) = { x i , n ( n ) } i = 1 , , n be a sequence of N( n ) points in R d . We say that G n ( n ) is an asymptotically uniform (a.u.) grid in [ a , b ] if

lim n m ( G n ( n ) ) = 0 ,

where

$m(\mathcal{G}_n(\boldsymbol{n})) = \max_{\boldsymbol{i}=\boldsymbol{1},\ldots,\boldsymbol{n}} \left\| \boldsymbol{x}_{\boldsymbol{i},n}(\boldsymbol{n}) - \left( \boldsymbol{a} + \boldsymbol{i}\,\frac{\boldsymbol{b}-\boldsymbol{a}}{\boldsymbol{n}} \right) \right\|_\infty$

is referred to as the distance of $\mathcal{G}_n(\boldsymbol{n})$ from the uniform grid $\{\boldsymbol{a}+\boldsymbol{i}(\boldsymbol{b}-\boldsymbol{a})/\boldsymbol{n}\}_{\boldsymbol{i}=\boldsymbol{1},\ldots,\boldsymbol{n}}$. Note that $\mathcal{G}_n(\boldsymbol{n})$ need not be contained in [ a , b ] in order to be an a.u. grid in [ a , b ]. Note also that the notation n is used instead of n (n) for simplicity, but it is understood that n = n (n) depends on n.
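As a small illustration of the definition (in the scalar case d = 1, with [a, b] = [0, 1] chosen for convenience and nothing else assumed), the midpoint grid has distance 1/(2n) from the uniform grid and is therefore asymptotically uniform; the sketch below computes m(G_n) directly.

    # Distance m(G_n) of a grid from the uniform grid {a + i*(b - a)/n} in the case d = 1,
    # [a, b] = [0, 1]; the midpoint grid has m(G_n) = 1/(2n) -> 0, so it is a.u. (illustrative only).
    import numpy as np

    def m(points, a=0.0, b=1.0):
        n = len(points)
        uniform = a + np.arange(1, n + 1) * (b - a) / n
        return np.max(np.abs(points - uniform))

    for n in (10, 100, 1000):
        midpoints = (np.arange(1, n + 1) - 0.5) / n
        print(n, m(midpoints))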

2.1.4 Regular sets

We say that $\Omega\subseteq\mathbb{R}^d$ is a regular set if it is bounded and μ d (∂Ω) = 0. Note that the condition “μ d (∂Ω) = 0” is equivalent to “χ Ω is continuous a.e. on $\mathbb{R}^d$”. Any regular set $\Omega\subseteq\mathbb{R}^d$ is measurable and we have $\mu_d(\Omega^\circ) = \mu_d(\Omega) = \mu_d(\overline{\Omega}) < \infty$. In Riemann integration theory, a regular set is simply a Peano–Jordan measurable set.

2.2 Statements of the main results

Theorem 2.1 is our first main result. It is a generalization to the multidimensional case of a previous result due to Bogoya, Böttcher, Grudsky, and Maximenko [26], Th. 1.3]. For a better comparison between Theorem 2.1 and [26], Th. 1.3], we recall that, if f : Ω R d R is a function defined on a regular set Ω, then the condition “f is bounded and continuous a.e.” is equivalent to “f is Riemann-integrable”.

Theorem 2.1.

Let f : Ω R d R be bounded and continuous a.e. on the regular set Ω with μ d (Ω) > 0 and E R ( f ) = [ inf Ω f , sup Ω f ] . Take any d-dimensional rectangle [ a , b ] containing Ω and any a.u. grid G n ( n ) = { x i , n ( n ) } i = 1 , , n in [ a , b ], where n = n ( n ) N d and n → ∞ as n → ∞. For every n, define I n ( Ω ) = { i { 1 , , n } : x i , n ( n ) Ω } and consider the multiset of samples { f ( x i , n ( n ) ) : i I n ( Ω ) } = { f 1 , n , , f | I n ( Ω ) | , n } and a multiset of | I n ( Ω ) | real numbers Λ n = { λ 1 , n , , λ | I n ( Ω ) | , n } with the following properties:

  1. { Λ n } n f ;

  2. Λ n ⊆ [infΩ fɛ n , supΩ f + ɛ n ] for every n and for some ɛ n → 0 as n → ∞.

Then, if σ n and τ n are two permutations of { 1 , , | I n ( Ω ) | } such that the vectors [ f σ n ( 1 ) , n , , f σ n ( | I n ( Ω ) | ) , n ] and [ λ τ n ( 1 ) , n , , λ τ n ( | I n ( Ω ) | ) , n ] are sorted in increasing order, we have

$\max_{i=1,\ldots,|I_n(\Omega)|} \big| f_{\sigma_n(i),n} - \lambda_{\tau_n(i),n} \big| \to 0 \quad \text{as } n\to\infty.$

In particular,

$\min_{\tau}\, \max_{i=1,\ldots,|I_n(\Omega)|} \big| f_{i,n} - \lambda_{\tau(i),n} \big| \to 0 \quad \text{as } n\to\infty,$

where the minimum is taken over all permutations τ of { 1 , , | I n ( Ω ) | } .

Remark 2.1.

Since E R ( f ) is closed, the hypothesis E R ( f ) = [ inf Ω f , sup Ω f ] in Theorem 2.1 is equivalent to asking that E R ( f ) is connected with infΩ f = ess infΩ f and supΩ f = ess supΩ f. Note that the condition infΩ f = ess infΩ f is equivalent to infΩ f ⩾ ess infΩ f (the other inequality infΩ f ⩽ ess infΩ f is always satisfied because E R ( f ) f ( Ω ) ̄ ). Similarly, the condition supΩ f = ess supΩ f is equivalent to supΩ f ⩽ ess supΩ f. Note also that the hypothesis E R ( f ) = [ inf Ω f , sup Ω f ] implies f ( Ω ) ̄ = E R ( f ) .

As a consequence of Theorem 2.1, in Corollary 2.1 we prove (a slightly more general version of) [26], Th. 1.3].

Corollary 2.1.

Let f : [ a , b ] R be bounded and continuous a.e. with E R ( f ) = [ inf [ a , b ] f , sup [ a , b ] f ] . Let { Λ n = { λ 1 , n , , λ d n , n } } n be a sequence of finite multisets of real numbers such that d n → ∞ as n → ∞. Assume the following:

  1. { Λ n } n f ;

  2. Λ n ⊆ [inf[a,b] fɛ n , sup[a,b] f + ɛ n ] for every n and for some ɛ n → 0 as n → ∞.

Then, for every a.u. grid { x i , n } i = 1 , , d n in [a, b] with { x i , n } i = 1 , , d n [ a , b ] , if σ n and τ n are two permutations of {1, …, d n } such that the vectors [ f ( x σ n ( 1 ) , n ) , , f ( x σ n ( d n ) , n ) ] and [ λ τ n ( 1 ) , n , , λ τ n ( d n ) , n ] are sorted in increasing order, we have

$\max_{i=1,\ldots,d_n} \big| f(x_{\sigma_n(i),n}) - \lambda_{\tau_n(i),n} \big| \to 0 \quad \text{as } n\to\infty.$

In particular,

$\min_{\tau}\, \max_{i=1,\ldots,d_n} \big| f(x_{i,n}) - \lambda_{\tau(i),n} \big| \to 0 \quad \text{as } n\to\infty,$

where the minimum is taken over all permutations τ of {1, …, d n }.

Proof.

Take an a.u. grid { x i , n } i = 1 , , d n in [a, b] and two permutations σ n and τ n as specified in the statement. Since { x i , n } i = 1 , , d n [ a , b ] by assumption, for every n we have I n ([a, b]) = {i ∈ {1, …, d n }: x i,n ∈ [a, b]} = {1, …, d n }. To conclude that max i = 1 , , d n | f ( x σ n ( i ) , n ) λ τ n ( i ) , n | 0 as n → ∞, apply Theorem 2.1 to the function f with the a.u. grid { x i , n } i = 1 , , d n and the multiset of real numbers Λ n = { λ 1 , n , , λ d n , n } . □

Remark 2.2.

The hypothesis “ E R ( f ) = [ inf [ a , b ] f , sup [ a , b ] f ] ” in Corollary 2.1 is replaced in ref. [26], Th. 1.3] by the weaker version “ E R ( f ) is connected”. However, the latter is not enough to get the thesis, as shown by the following counterexample. Let f ( x ) = χ { 1 } ( x ) : [ 0,1 ] R be the characteristic function of {1}. Note that f is bounded and continuous a.e. with E R ( f ) = { 0 } connected. Take d n = n and λ 1,n = … = λ n,n = 0 for all n. All the hypotheses of Corollary 2.1 are satisfied except for the assumption “ E R ( f ) = [ inf [ 0,1 ] f , sup [ 0,1 ] f ] ”, and the thesis does not hold. Indeed, if we take the a.u. grid in [0, 1] given by { x i , n = i / n } i = 1 , , n , then the samples f(x 1,n ), …, f(x n,n ) are sorted in increasing order just as the numbers λ 1,n , …, λ n,n , and we have max i=1,…,n |f(x i,n ) − λ i,n | = 1, which does not tend to 0 as n → ∞. This same counterexample shows that the hypothesis “ E R ( f ) is connected” in ref. [26], Th. 1.3] must be replaced by the stronger version “ E R ( f ) = [ inf [ a , b ] f , sup [ a , b ] f ] ” as in Corollary 2.1, otherwise the result does not hold.[2]
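The counterexample can also be checked numerically; the short sketch below (an illustration of the construction above, adding nothing beyond it) confirms that the maximum difference stays equal to 1 for every n.

    # Numerical confirmation of the counterexample in Remark 2.2: f = chi_{{1}} on [0, 1],
    # lambda_{i,n} = 0 for all i, a.u. grid x_{i,n} = i/n; the maximum difference is always 1.
    import numpy as np

    for n in (10, 100, 1000):
        x = np.arange(1, n + 1) / n
        samples = np.where(x == 1.0, 1.0, 0.0)   # f(x_{i,n})
        lam = np.zeros(n)                        # lambda_{i,n} = 0
        print(n, np.max(np.abs(np.sort(samples) - np.sort(lam))))   # prints 1.0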

Theorem 2.2 is our second main result. It shows that, if in Corollary 2.1 we assume that Λ n f([a, b]) and f has a finite number of local maximum, local minimum, and discontinuity points, then the values in Λ n , up to a suitable permutation, are exact samples of f on an a.u. grid in [a, b]. It is important to point out that by “local maximum/minimum point” we here mean “weak local maximum/minimum point” according to the following definition.

Definition 2.1

(local extremum points). Given a function f : [ a , b ] R and a point x 0 ∈ [a, b], we say that x 0 is a local maximum point (resp., local minimum point) for f if f(x 0) ⩾ f(x) (resp., f(x 0) ⩽ f(x)) for all x belonging to a neighborhood of x 0 in [a, b].

For example, if f is constant on [a, b], then all points of [a, b] are both local maximum and local minimum points for f.

Theorem 2.2.

Let f : [ a , b ] R be bounded with a finite number of local maximum points, local minimum points, and discontinuity points, and with E R ( f ) = [ inf [ a , b ] f , sup [ a , b ] f ] . Let { Λ n = { λ 1 , n , , λ d n , n } } n be a sequence of finite multisets of real numbers such that d n → ∞ as n → ∞. Assume the following:

  1. { Λ n } n f ;

  2. Λ n f([a, b]) for every n.

Then, there exist an a.u. grid { x i , n } i = 1 , , d n in [a, b] with { x i , n } i = 1 , , d n [ a , b ] and a permutation τ n of {1, …, d n } such that, for every n,

λ τ n ( i ) , n = f ( x i , n ) , i = 1 , , d n .
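As an informal illustration of Theorem 2.2 (with a symbol of our own choosing, anticipating the closest-preimage construction used in the proof in Section 3.5), take Λ_n equal to the spectrum of T_n(f) with f(θ) = 2 − 2cos θ, which is contained in f([−π, π]): matching sorted eigenvalues with sorted samples and then selecting, for each grid point, the preimage of the matched eigenvalue closest to it yields a grid of exact samples that is close to the uniform grid.

    # Sketch for Theorem 2.2 with f(theta) = 2 - 2*cos(theta) on [-pi, pi] and Lambda_n equal to
    # the spectrum of T_n(f) (illustrative assumptions; the construction mirrors (3.8) in the proof).
    import numpy as np

    f = lambda t: 2 - 2 * np.cos(t)
    n = 200
    lam = f(np.arange(1, n + 1) * np.pi / (n + 1))            # spectrum of T_n(f), inside f([-pi, pi])
    theta = -np.pi + np.arange(1, n + 1) * 2 * np.pi / n      # uniform grid in [-pi, pi]

    order = np.argsort(f(theta), kind="stable")               # sigma_n: sorts the samples f(theta_i)
    mu = np.empty(n)
    mu[order] = np.sort(lam)                                  # mu_i: eigenvalue matched to theta_i
    roots = np.arccos(1 - mu / 2)                             # nonnegative preimage: f(roots) = mu
    x = np.where(np.abs(roots - theta) <= np.abs(-roots - theta), roots, -roots)

    print(np.max(np.abs(f(x) - mu)))                          # essentially zero: exact samples up to round-off
    print(np.max(np.abs(np.sort(x) - theta)))                 # small: the grid {x_i} is close to uniform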

Theorem 2.3 is our last main result. It is a generalization of Corollary 2.1 to the case where the scalar function f is replaced by a matrix-valued function.

Theorem 2.3.

Let f 1 , , f k : [ a , b ] R be bounded and continuous a.e. with E R ( f 1 ) = [ inf [ a , b ] f 1 , sup [ a , b ] f 1 ] , , E R ( f k ) = [ inf [ a , b ] f k , sup [ a , b ] f k ] . Let { Λ n = { λ 1 , n , , λ d n , n } } n be a sequence of finite multisets of real numbers such that d n → ∞ as n → ∞. Assume the following:

  1. { Λ n } n f with f = diag(f 1, …, f k );

  2. For every n there exists a partition { Λ ̃ n , 1 , , Λ ̃ n , k } of Λ n such that, for every j = 1, …, k, | Λ ̃ n , j | / d n 1 / k as n → ∞ and Λ ̃ n , j [ inf [ a , b ] f j ε n , sup [ a , b ] f j + ε n ] for some ɛ n → 0 as n → ∞.

Then, for every n there exists a partition {Λ n,1, …, Λ n,k } of Λ n such that, for every j = 1, …, k, the following properties hold:

  1. | Λ n , j | = | Λ ̃ n , j | ;

  2. Λ n,j ⊆ [inf[a,b] f j δ n , sup[a,b] f j + δ n ] for some δ n → 0 as n → ∞;

  3. { Λ n , j } n f j ;

  4. Let Λ n , j = { λ 1 , n ( j ) , , λ | Λ n , j | , n ( j ) } . For every a.u. grid { x i , n ( j ) } i = 1 , , | Λ n , j | in [a, b] with { x i , n ( j ) } i = 1 , , | Λ n , j | [ a , b ] , if σ n,j and τ n,j are two permutations of {1, …, |Λ n,j |} such that the vectors [ f j ( x σ n , j ( 1 ) , n ( j ) ) , , f j ( x σ n , j ( | Λ n , j | ) , n ( j ) ) ] and [ λ τ n , j ( 1 ) , n ( j ) , , λ τ n , j ( | Λ n , j | ) , n ( j ) ] are sorted in increasing order, we have

    $\max_{i=1,\ldots,|\Lambda_{n,j}|} \big| f_j\big(x^{(j)}_{\sigma_{n,j}(i),n}\big) - \lambda^{(j)}_{\tau_{n,j}(i),n} \big| \to 0 \quad \text{as } n\to\infty.$

    In particular,

    $\min_{\tau}\, \max_{i=1,\ldots,|\Lambda_{n,j}|} \big| f_j\big(x^{(j)}_{i,n}\big) - \lambda^{(j)}_{\tau(i),n} \big| \to 0 \quad \text{as } n\to\infty,$

    where the minimum is taken over all permutations τ of {1, …, |Λ n,j |}.

In order to prove Theorem 2.3, we shall need the following lemmas, which are reported here because they are of independent interest and may be considered as further main results of this paper, although “less important” than Theorems 2.1–2.3.

Lemma 2.1.

Let f 1 , , f k : [ a , b ] R be measurable. Let { Λ n = { λ 1 , n , , λ d n , n } } n be a sequence of finite multisets of real numbers such that d n → ∞ as n → ∞. Assume the following:

  1. { Λ n } n f with f = diag(f 1, …, f k );

  2. { L n , 1 } n , , { L n , k } n are sequences of natural numbers such that L n,1 + … + L n,k = d n for every n and L n,j /d n → 1/k as n → ∞ for every j = 1, …, k.

Then, for every n there exists a partition {Λ n,1, …, Λ n,k } of Λ n such that, for every j = 1, …, k, the following properties hold:

  1. |Λ n,j | = L n,j ;

  2. { Λ n , j } n f j .

Lemma 2.2.

Let f 1 , , f k : [ a , b ] R be measurable. Let { Λ n = { λ 1 , n , , λ d n , n } } n be a sequence of finite multisets of real numbers such that d n → ∞ as n → ∞. Assume the following:

  1. { Λ n } n f with f = diag(f 1, …, f k );

  2. for every n there exists a partition { Λ ̃ n , 1 , , Λ ̃ n , k } of Λ n such that, for every j = 1, …, k, | Λ ̃ n , j | / d n 1 / k as n → ∞ and Λ ̃ n , j ( E R ( f j ) ) ε n for some ɛ n → 0 as n → ∞.

Then, for every n there exists a partition {Λ n,1, …, Λ n,k } of Λ n such that, for every j = 1, …, k, the following properties hold:

  1. | Λ n , j | = | Λ ̃ n , j | ;

  2. { Λ n , j } n f j ;

  3. Λ n , j ( E R ( f j ) ) δ n for some δ n → 0 as n → ∞.

3 Proofs of the main results

3.1 Monotone rearrangement

In this section, we recall the notion of monotone rearrangement and we collect some related results that we shall need in the proof of Theorem 2.1.

Definition 3.1.

Let f : Ω R d R be measurable on a set Ω with 0 < μ d (Ω) < ∞. The monotone rearrangement of f is the function denoted by f and defined as follows:

(3.1) f ( y ) = inf u R : μ d { f u } μ d ( Ω ) y , y ( 0,1 ) .

Note that f (y) is a well-defined real number for every y ∈ (0, 1), because

$\lim_{u\to+\infty}\mu_d\{f\leqslant u\} = \mu_d(\Omega), \qquad \lim_{u\to-\infty}\mu_d\{f\leqslant u\} = 0.$
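As a quick illustration of Definition 3.1 (with an example function of our own choosing), the monotone rearrangement can be approximated directly from its definition by discretizing the function u ↦ μ_d{f ⩽ u}/μ_d(Ω) and taking its generalized inverse; for f(θ) = 2 − 2cos θ on Ω = [−π, π] the result matches the closed form 2 − 2cos(πy).

    # Approximating the monotone rearrangement of f(theta) = 2 - 2*cos(theta) on [-pi, pi]
    # directly from Definition 3.1 (illustrative sketch; the closed form here is 2 - 2*cos(pi*y)).
    import numpy as np

    f = lambda t: 2 - 2 * np.cos(t)
    theta = np.linspace(-np.pi, np.pi, 20_001)
    vals = f(theta)

    def rearrangement(y, u_grid=np.linspace(0.0, 4.0, 4001)):
        # F_f(u) = mu_1{f <= u} / mu_1(Omega), approximated by the fraction of grid points with f <= u
        F = np.array([np.mean(vals <= u) for u in u_grid])
        return np.array([u_grid[np.argmax(F >= yy)] for yy in y])   # inf{u in R : F_f(u) >= y}

    y = np.linspace(0.01, 0.99, 99)
    print(np.max(np.abs(rearrangement(y) - (2 - 2 * np.cos(np.pi * y)))))   # small discretization error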

In probability theory, where f is interpreted as a random variable on Ω with probability distribution m f and distribution function F f given by

m f ( A ) = P { f A } = μ d { f A } μ d ( Ω ) , A R  is a Borel set , F f ( u ) = P { f u } = μ d { f u } μ d ( Ω ) , u R ,

the monotone rearrangement f in (3.1) can be rewritten as

f ( y ) = inf u R : F f ( u ) y , y ( 0,1 ) ,

and is referred to as the quantile function of f or the generalized inverse of F f . It is clear that f is monotone increasing on (0, 1), which implies that the limits lim y→0 f (y) and lim y→1 f (y) always exist. The next lemma gives the exact values of these limits and allows us to complete the definition of f by continuous extension; see Definition 3.2.

Lemma 3.1.

Let f : Ω R d R be measurable on a set Ω with 0 < μ d (Ω) < ∞. Then,

lim y 0 f ( y ) = e s s inf Ω f , lim y 1 f ( y ) = e s s sup Ω f .

Proof.

We only prove the equality lim y→0 f (y) = ess infΩ f as the proof of the other equality is analogous.

Case 1: ess infΩ f = −∞. In this case, by definition of ess infΩ f, we have μ d {fu} > 0 for all u R , i.e., F f (u) > 0 for all u R . Hence, for every u 0 R there exists y 0 = F f (u 0)/2 ∈ (0, 1) such that, for 0 < yy 0,

f ( y ) f ( y 0 ) = inf { u R : F f ( u ) y 0 } u 0 .

This means that lim y→0 f (y) = −∞.

Case 2: e s s inf Ω f = m R . In this case, since f E R ( f ) a.e., we have μ d {fu} = 0 for all u < m, i.e., F f (u) = 0 for all u < m. Thus, for every y ∈ (0, 1),

f ( y ) = inf { u R : F f ( u ) y } m lim y 0 f ( y ) m .

To prove the other inequality, fix ɛ > 0. By definition of m, we have

μ d { f m + ε } > 0 F f ( m + ε ) = μ d { f m + ε } μ d ( Ω ) = α ε ( 0,1 ] .

Hence, α ɛ /2 ∈ (0, 1) and

f ( α ε / 2 ) = inf { u R : F f ( u ) α ε / 2 } m + ε lim y 0 f ( y ) m + ε .

Since this is true for all ɛ > 0, we conclude that lim y→0 f (y) ⩽ m. □

Definition 3.2

(monotone rearrangement). Let f : Ω R d R be measurable on a set Ω with 0 < μ d (Ω) < ∞. The monotone rearrangement of f is the function denoted by f and defined as follows:

f ( y ) = inf u R : μ d { f u } μ d ( Ω ) y , if y ( 0,1 ) , f ( 0 ) = e s s inf Ω f , if e s s inf Ω f > , f ( 1 ) = e s s sup Ω f , if e s s sup Ω f < .

The domain of f is denoted by Ω and is always assumed to be the largest possible. This means that Ω always includes (0, 1) and it also includes 0 (resp., 1) whenever f (0) (resp., f (1)) is defined, i.e., whenever ess infΩ f > −∞ (resp., ess supΩ f < ∞).

We remark that, according to Definition 3.2 and Lemma 3.1, f is always continuous at 0 and 1 whenever it is defined there. In particular, the discontinuity points of f , if any, must lie in (0, 1). The next lemma collects some basic properties of monotone rearrangements.

Lemma 3.2.

Let f : Ω R d R be measurable on a set Ω with 0 < μ d (Ω) < ∞.

  1. f is monotone increasing and left-continuous on Ω.

  2. For every Borel set A R , we have

    μ 1 { f A } = μ d { f A } μ d ( Ω ) .

  3. For every continuous bounded function F : R R , we have

    1 μ d ( Ω ) Ω F ( f ( x ) ) d x = 0 1 F ( f ( y ) ) d y .

  4. E R ( f ) = E R ( f ) .

  5. For every y ∈ Ω we have f ( y ) E R ( f ) .

  6. If f is continuous on (0, 1) then E R ( f ) = f ( Ω ) .

  7. f is continuous on (0, 1) if and only if E R ( f ) is connected.

Proof.

1–3. These properties can be derived from [27], Ch. 3, Prop. 4 and Prob. 3], [27], Ch. 4, Th. 15] and [27], Ch. 14, Prop. 7].

4. By definition of E R ( f ) and property 2,

E R ( f ) = { x R : μ d { f D ( x , ε ) } > 0   for all  ε > 0 } = x R : μ d { f D ( x , ε ) } > 0   for all  ε > 0 = E R ( f ) .

5. Let y ∈ Ω. f is left-continuous at y (by property 1), and it is continuous at y if y = 0 or y = 1 (because f is always continuous at 0 and 1 whenever it is defined there). Thus, we have μ 1{f D(f (y), ɛ)} > 0 for every ɛ > 0. Hence, y E R ( f ) .

6. Suppose that f is continuous on (0, 1). This means that f is continuous on Ω, since f is always continuous at 0 and 1 whenever it is defined there. By property 5, we infer that f ( Ω ) E R ( f ) . On the other hand, E R ( f ) f ( Ω ) ̄ = f ( Ω ) , where the latter equality follows from the fact that f ) is closed by definition of Ω and the continuity and monotonicity of f . Thus, E R ( f ) = f ( Ω ) .

7. (⟹) Suppose that f is continuous on (0, 1), i.e., continuous on Ω. By properties 4 and 6, we have

E R ( f ) = E R ( f ) = f ( Ω ) .

In particular, E R ( f ) is connected because it is equal to the image of a connected set Ω through a continuous function f ; see [28], Th. 4.22].

(⟸) Suppose that f is not continuous on (0, 1). Then, there exists for f a discontinuity point y 0 ∈ (0, 1), which is necessarily a jump because f is monotone increasing. In particular, we can find a point α in the open jump ( f ( y 0 ) , f + ( y 0 ) ) , where f ( y 0 ) and f + ( y 0 ) are the left and right limits of f in y 0 (note that f ( y 0 ) = f ( y 0 ) by the left-continuity of f ; see property 1). Obviously, α E R ( f ) . We show that α disconnects E R ( f ) . Take two points u, v ∈ (0, 1) such that u < y 0 < v. By property 5, we have f ( u ) , f ( v ) E R ( f ) . Moreover, by monotonicity, f (u) < α < f (v). Hence, E R ( f ) = E R ( f ) is not connected. □

The main results we need on monotone rearrangements are Theorems 3.1 and 3.2. For the proof of Theorem 3.1, see [29], Th. 3.1]. Theorem 3.2 is a generalization of ref. [29], Th. 3.2] and is proved below.

Theorem 3.1.

Let f : Ω R d R be continuous a.e. on the regular set Ω with μ d (Ω) > 0. Take any d-dimensional rectangle [ a , b ] containing Ω and any a.u. grid G n ( n ) = { x i , n ( n ) } i = 1 , , n in [ a , b ], where n = n ( n ) N d and n → ∞ as n → ∞. For every n, consider the samples

f ( x i , n ( n ) ) , i I n ( Ω ) = i { 1 , , n } : x i , n ( n ) Ω ,

sort them in increasing order, and put them into a vector [s 0, …, s ω(n)], where ω ( n ) = | I n ( Ω ) | 1 . Let f n : [ 0,1 ] R be the linear spline function interpolating the samples [s 0, …, s ω(n)] over the equally spaced nodes [0, 1/ω(n), 2/ω(n), …, 1] in [0, 1]. Then,

lim n f n ( y ) = f ( y )

for every continuity point y of f in (0, 1).
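The statement can be visualized with a few lines of code (an informal illustration with an example symbol of our own choosing; in this example the convergence is in fact uniform, in accordance with Theorem 3.2 below): the linear spline through the sorted samples of f(θ) = 2 − 2cos θ on [−π, π] approaches the monotone rearrangement 2 − 2cos(πy).

    # Linear spline through the sorted samples of f versus the monotone rearrangement, as in
    # Theorem 3.1, for f(theta) = 2 - 2*cos(theta) on [-pi, pi] (illustrative example only).
    import numpy as np

    f = lambda t: 2 - 2 * np.cos(t)
    rearr = lambda y: 2 - 2 * np.cos(np.pi * y)               # closed-form rearrangement for this f

    for n in (20, 200, 2000):
        grid = -np.pi + np.arange(1, n + 1) * 2 * np.pi / n   # a.u. (here exactly uniform) grid
        s = np.sort(f(grid))                                  # sorted samples s_0, ..., s_omega(n)
        nodes = np.linspace(0.0, 1.0, n)                      # nodes 0, 1/omega(n), ..., 1
        y = np.linspace(0.0, 1.0, 5001)
        spline = np.interp(y, nodes, s)                       # piecewise linear interpolant
        print(n, np.max(np.abs(spline - rearr(y))))           # decreases as n grows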

Theorem 3.2.

In Theorem 3.1, suppose that E R ( f ) is connected. Then, the following properties hold.

  1. f n f uniformly on every compact interval [α, β] ⊂ (0, 1).

  2. If f is bounded from below on Ω with infΩ f = ess infΩ f, then f n f uniformly on every compact interval [0, α] ⊂ [0, 1).

  3. If f is bounded from above on Ω with supΩ f = ess supΩ f, then f n f uniformly on every compact interval [α, 1] ⊂ (0, 1].

  4. If f is bounded on Ω and E R ( f ) = [ inf Ω f , sup Ω f ] , then f n f uniformly on [0, 1].

Recall from Remark 2.1 that the assumption infΩ f = ess infΩ f in the second statement is equivalent to infΩ f ⩾ ess infΩ f and the assumption supΩ f = ess supΩ f in the third statement is equivalent to supΩ f ⩽ ess supΩ f. Moreover, under the hypothesis that E R ( f ) is connected and bounded, the assumption E R ( f ) = [ inf Ω f , sup Ω f ] in the fourth statement is equivalent to asking that infΩ f = ess infΩ f and supΩ f = ess supΩ f, which in turn is equivalent to asking that f ( Ω ) E R ( f ) , i.e., f ( Ω ) ̄ = E R ( f ) . The proof of Theorem 3.2 relies on the following lemma, which is sometimes referred to as Dini’s second theorem [30], pp. 81 and 270, Prob. 127].

Lemma 3.3.

If a sequence of monotone functions converges pointwise on a compact interval to a continuous function, then it converges uniformly.

Proof of Theorem 3.2.

  1. f is continuous on (0, 1) by Lemma 3.2 and f n f everywhere in (0, 1) by Theorem 3.1. Since the functions f n are continuous and increasing on (0, 1), the thesis follows from Lemma 3.3.

  2. Since f is bounded from below on Ω, we have ess infΩ f > −∞ and f (0) = lim y→0 f (y) = ess infΩ f is defined. Moreover, the function f is continuous on [0, 1) by Lemma 3.2 and the definition f (0) = lim y→0 f (y). Since f n ( 0 ) = s 0 is the evaluation of f at a point of Ω and infΩ f = ess infΩ f by assumption, for every n we have

    f n ( 0 ) inf Ω f = e s s inf Ω f = f ( 0 ) .

    Since f n f everywhere in (0, 1) by Theorem 3.1 and the functions f n , f are continuous and increasing on [0, 1), for every ɛ > 0 we have

    f ( 0 ) lim inf n f n ( 0 ) lim sup n f n ( 0 ) lim sup n f n ( ε ) = f ( ε ) ,

    hence

    f ( 0 ) = lim n f n ( 0 ) .

    We have therefore proved that f n f everywhere in [0, 1), and the thesis now follows from Lemma 3.3.

  3. It is proved in the same way as the second statement.

  4. It follows immediately from the second and third statements. □

3.2 Proof of Theorem 2.1

The last result we need to prove Theorem 2.1 is the following technical lemma [29], Lem. 3.3].

Lemma 3.4.

Let ω(n) be a sequence of positive integers such that ω(n) → ∞ and let g n : [ 0,1 ] R be a sequence of increasing functions such that

$\lim_{n\to\infty} \frac{1}{\omega(n)} \sum_{i=0}^{\omega(n)} F\!\left( g_n\!\left( \frac{i}{\omega(n)} \right) \right) = \int_0^1 F(g(y))\,\mathrm{d}y \qquad \forall\, F\in C_c(\mathbb{R}),$

where g : ( 0,1 ) R is increasing. Then, g n (y) → g(y) for every continuity point y of g in (0, 1).

Proof of Theorem 2.1.

We begin with the following observation: since E R ( f ) is connected and bounded by assumption, the domain of f is Ω = [0, 1] and f is continuous on [0, 1]; see Lemma 3.2 (property 7) and Definition 3.2.

Sort the samples { f ( x i , n ( n ) ) : i I n ( Ω ) } = { f 1 , n , , f | I n ( Ω ) | , n } in increasing order through a permutation σ n as in the statement of the theorem, and put them into a vector [s 0, …, s ω(n)], where ω ( n ) = | I n ( Ω ) | 1 . Note that [ s 0 , , s ω ( n ) ] = [ f σ n ( 1 ) , n , , f σ n ( | I n ( Ω ) | ) , n ] . Let f n : [ 0,1 ] R be the linear spline function interpolating the samples [s 0, …, s ω(n)] over the equally spaced nodes [0, 1/ω(n), 2/ω(n), …, 1] in [0, 1]. By Theorem 3.2,

(3.2) f n f  uniformly on  [ 0, 1 ]  as n .

Sort the real numbers Λ n = { λ 1 , n , , λ | I n ( Ω ) | , n } in increasing order through a permutation τ n as in the statement of the theorem, and put them into a vector [t 0, …, t ω(n)]. Note that [ t 0 , , t ω ( n ) ] = [ λ τ n ( 1 ) , n , , λ τ n ( | I n ( Ω ) | ) , n ] . Let g n : [ 0,1 ] R be the linear spline function interpolating the samples [t 0, …, t ω(n)] over the equally spaced nodes [0, 1/ω(n), 2/ω(n), …, 1] in [0, 1]. Since { Λ n } n f by the assumption { Λ n } n f and Lemma 3.2 (property 3), the hypotheses of Lemma 3.4 are satisfied with g = f . Since f is continuous on [0, 1], we infer that g n (y) → f (y) for every y ∈ (0, 1). Moreover, g n (y) → f (y) also for y = 0, 1. Indeed, for every n we have

g n ( 0 ) f ( 0 ) ε n ,

because on the one hand f (0) = ess infΩ f = infΩ f by our assumption E R ( f ) = [ inf Ω f , sup Ω f ] , and on the other hand Λ n ⊆ [infΩ fɛ n , supΩ f + ɛ n ] by hypothesis. Thus, for every ɛ > 0 we have

f ( 0 ) lim inf n g n ( 0 ) lim sup n g n ( 0 ) lim sup n g n ( ε ) = f ( ε ) ,

and so, by the continuity of f at 0,

lim n g n ( 0 ) = f ( 0 ) .

The same argument shows that g n (1) → f (1) as n → ∞. We conclude that g n (y) → f (y) for all y ∈ [0, 1] and so, by Lemma 3.3,

(3.3) g n f  uniformly on  [ 0, 1 ]  as n .

By combining (3.2) and (3.3), we obtain that $\| f_n - g_n \|_{\infty,[0,1]} \to 0$ as n → ∞. In particular,

$\max_{i=0,\ldots,\omega(n)} \left| f_n\!\left( \frac{i}{\omega(n)} \right) - g_n\!\left( \frac{i}{\omega(n)} \right) \right| = \max_{i=0,\ldots,\omega(n)} |s_i - t_i| = \max_{i=1,\ldots,|I_n(\Omega)|} \big| f_{\sigma_n(i),n} - \lambda_{\tau_n(i),n} \big| \to 0 \quad \text{as } n\to\infty,$

which proves the theorem. □

3.3 Properties of a.u. grids

In this section, we collect the properties of a.u. grids that we need in the proof of Theorem 2.2. We begin with the following general result on real vectors. If A C m × m , we denote by ‖A‖ = max(σ 1(A), …, σ m (A)) the spectral (or Euclidean) norm of A.

Lemma 3.5.

Let x , y R m and let σ, τ be permutations of {1, …, m} that sort the components of x , y in increasing order, i.e., x σ(1) ⩽ … ⩽ x σ(m) and y τ(1) ⩽ … ⩽ y τ(m). Then,

(3.4) $\max_{i=1,\ldots,m} | x_{\sigma(i)} - y_{\tau(i)} | \leqslant \max_{i=1,\ldots,m} | x_i - y_i |.$

Proof.

By Weyl’s perturbation theorem [31], Cor. III.2.6], for every pair of m × m Hermitian matrices A and B we have

$\max_{i=1,\ldots,m} | \lambda_i(A) - \lambda_i(B) | \leqslant \| A - B \|,$

where the eigenvalues of A and B are arranged in increasing order: λ 1(A) ⩽ … ⩽ λ m (A) and λ 1(B) ⩽ … ⩽ λ m (B). If we apply this result to the real diagonal matrices A = diag(x 1, …, x m ) and B = diag(y 1, …, y m ), we obtain (3.4). □
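Inequality (3.4) is easy to test numerically; the following quick check on random vectors (purely illustrative) confirms that sorting both vectors can only decrease the maximum componentwise deviation.

    # Random check of inequality (3.4) in Lemma 3.5 (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    for _ in range(5):
        x = rng.normal(size=1000)
        y = rng.normal(size=1000)
        lhs = np.max(np.abs(np.sort(x) - np.sort(y)))
        rhs = np.max(np.abs(x - y))
        assert lhs <= rhs + 1e-12   # tolerance only for floating-point round-off
        print(lhs, rhs)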

As a consequence of Lemma 3.5, the increasing rearrangement of an a.u. grid is still an a.u. grid.

Lemma 3.6

(increasing rearrangement of an a.u. grid is still an a.u. grid). Let $\mathcal{G}_n = \{x_{i,n}\}_{i=1,\ldots,d_n}$ be an a.u. grid in [a, b] with d n → ∞, and let τ n be a permutation of {1, …, d n } such that $x_{\tau_n(1),n} \leqslant \cdots \leqslant x_{\tau_n(d_n),n}$. Then $\mathcal{G}_n' = \{x_{\tau_n(i),n}\}_{i=1,\ldots,d_n}$ is an a.u. grid in [a, b] with $m(\mathcal{G}_n') \leqslant m(\mathcal{G}_n)$ for all n.

Proof.

We apply Lemma 3.5 with $\mathbf{x} = [x_{i,n}]_{i=1}^{d_n}$ and $\mathbf{y} = [a + i(b-a)/d_n]_{i=1}^{d_n}$, and we obtain

$m(\mathcal{G}_n') = \max_{i=1,\ldots,d_n} \left| x_{\tau_n(i),n} - \left( a + i\,\frac{b-a}{d_n} \right) \right| \leqslant \max_{i=1,\ldots,d_n} \left| x_{i,n} - \left( a + i\,\frac{b-a}{d_n} \right) \right| = m(\mathcal{G}_n).$

In view of the next lemma, we point out that if $\mathcal{G}_n = \{x_{i,n}\}_{i=1,\ldots,d_n}$ is an a.u. grid in [a, b] then $\mathcal{G}_n$ is a sequence of points for each fixed n. Moreover, every sequence of points $\mathcal{G}_n = \{x_{i,n}\}_{i=1,\ldots,d_n}$ can also be interpreted as a multiset consisting of d n elements. In particular, if $\mathcal{G}_n = \{x_{i,n}\}_{i=1,\ldots,d_n}$ and $\mathcal{G}_n' = \{x_{i,n}'\}_{i=1,\ldots,d_n'}$ are two sequences of points (and hence two multisets), the intersection $\mathcal{G}_n\cap\mathcal{G}_n'$, the union $\mathcal{G}_n\cup\mathcal{G}_n'$, the difference $\mathcal{G}_n\setminus\mathcal{G}_n'$, the symmetric difference $\mathcal{G}_n\,\triangle\,\mathcal{G}_n' = (\mathcal{G}_n\setminus\mathcal{G}_n')\cup(\mathcal{G}_n'\setminus\mathcal{G}_n)$, etc., are well-defined multisets. Throughout this paper, if $\{a_n\}_n$ and $\{b_n\}_n$ are any two sequences such that a n , b n ≠ 0 for all n, we write $a_n\sim b_n$ as n → ∞ to indicate that a n /b n → 1 as n → ∞.

Lemma 3.7

(multiset differing little from an a.u. grid is an a.u. grid). Let $\mathcal{G}_n = \{x_{i,n}\}_{i=1,\ldots,d_n}$ be an a.u. grid in [a, b] with d n → ∞, and let $\mathcal{G}_n' = \{x_{i,n}'\}_{i=1,\ldots,d_n'}$ be a sequence of $d_n'$ real points such that $\mathcal{G}_n' \subseteq [a-\varepsilon_n, b+\varepsilon_n]$ for some ɛ n → 0 and $|\mathcal{G}_n\,\triangle\,\mathcal{G}_n'| = o(d_n)$ as n → ∞. Then, $d_n' \sim d_n$ as n → ∞ and $\mathcal{G}_n'$ is an a.u. grid in [a, b] provided that we rearrange its points in increasing order.

Proof.

Without loss of generality, we can assume that the points of G n and G n are arranged in increasing order:

x 1 , n x d n , n , x 1 , n x d n , n .

Indeed, after rearranging the points of G n and G n in increasing order (if necessary), the assumptions of the lemma are still satisfied, because G n remains an a.u. grid in [a, b] by Lemma 3.6 and G n G n does not change. To simplify the notation, we prove the lemma in the case [a, b] = [0, 1]; up to obvious modifications, the proof for a general interval [a, b] is the same as the proof for the interval [0, 1]. Let a n = | G n G n | and b n = | G n G n | = o ( d n ) . Let I n = { i 1 , , i a n } and J n = { j 1 , , j a n } be two sets of indices such that

{ x i 1 , n , , x i a n , n } = G n G n , 1 i 1 < < i a n d n , { x j 1 , n , , x j a n , n } = G n G n , 1 j 1 < < j a n d n ;

see Figure 1. We make the following observations.

Figure 1: Illustration for the proof of Lemma 3.7. The dots on the first line are the points of $\mathcal{G}_n$ and the dots on the second line are the points of $\mathcal{G}_n'$. The green dots are the points of $\mathcal{G}_n\cap\mathcal{G}_n'$ while the red dots are the points of $\mathcal{G}_n\,\triangle\,\mathcal{G}_n'$. In this case, we have d n = 12, $d_n' = 14$, a n = 5, b n = 16, I n = {3, 4, 8, 11, 12}, J n = {4, 5, 10, 13, 14}.

  1. x i k , n = x j k , n for all k = 1, …, a n . Indeed, x i 1 , n , , x i a n , n and x j 1 , n , , x j a n , n are the points of the same multiset G n G n arranged in increasing order.

  2. | d n d n | b n . Indeed, if c n = | G n \ G n | and e n = | G n \ G n | , then

    d n c n = | G n | | G n \ G n | = | G n G n | = a n , d n e n = | G n | | G n \ G n | = | G n G n | = a n , c n + e n = | G n \ G n | + | G n \ G n | = | G n G n | = b n ,

    and

    | d n d n | = | d n c n + c n e n + e n d n | = | a n + c n e n a n | = | c n e n | c n + e n = b n .

  3. |i k j k | ⩽ b n for all k = 1, …, a n . Indeed, if c n , k = | G n , k \ G n | with G n , k = { x 1 , n , , x i k , n } G n and e n , k = | G n , k \ G n | with G n , k = { x 1 , n , , x j k , n } G n , then[3]

    i k c n , k = | G n , k | | G n , k \ G n | = | G n , k G n | = | G n , k G n G n | = | { x i 1 , n , , x i k , n } | = k , j k e n , k = | G n , k | | G n , k \ G n | = | G n , k G n | = | G n , k G n G n | = | { x j 1 , n , , x j k , n } | = k , c n , k + e n , k = | G n , k \ G n | + | G n , k \ G n | | G n \ G n | + | G n \ G n | = c n + e n = b n ,

    and

    | i k j k | = | i k c n , k + c n , k e n , k + e n , k j k | = | k + c n , k e n , k k | = | c n , k e n , k | c n , k + e n , k b n .

From | d n d n | b n and the assumption b n = o(d n ), we immediately obtain that

d n d n 1 = d n d n d n b n d n 0 as n d n d n as n .

It remains to prove that G n is a.u. in [0, 1], i.e.,

(3.5) lim n m G n = 0 , m G n = max j = 1 , , d n x j , n j d n .

Recall that, by assumption, G n is a.u. in [0, 1], i.e.,

lim n m ( G n ) = 0 , m ( G n ) = max i = 1 , , d n x i , n i d n .

We consider two cases.

  1. jJ n . In this case, j = j k for some k = 1, …, a n and x j , n = x j k , n = x i k , n . Thus,

    (3.6) x j , n j d n = x i k , n j k d n x i k , n i k d n + i k d n i k d n + i k d n j k d n = x i k , n i k d n + i k | d n d n | d n d n + | i k j k | d n m ( G n ) + 2 b n d n .

  2. jJ n . In this case, let j k , j k+1J n be the two indices in J n surrounding j as in Figure 2. Note that j k may not exist if j is “too close” to the left boundary (as it would happen in Figure 2 if j were equal to 1, 2 or 3); in this situation, we have j k+1 = j 1 and we set by convention j k = j 0 = 0, i k = i 0 = 0, x 0 , n = x 0 , n = 0 . Similarly, if j k+1 does not exist (this may happen if j is “too close” to the right boundary), then j k = j a n and we set by convention j k + 1 = j a n + 1 = d n + 1 , i k + 1 = i a n + 1 = d n + 1 , x d n + 1 , n = x d n + 1 , n = 1 . With these definitions and conventions, we have the following.

    1. j k < j < j k+1.

    2. x i k , n = x j k , n x j k + 1 , n = x i k + 1 , n .

    3. x j k , n ε n x j , n x j k + 1 , n + ε n (recall that G n [ ε n , 1 + ε n ] by assumption).

    4. |jj k | ⩽ b n . Indeed, looking at Figure 2 and keeping in mind our conventions for the left and right boundaries, we have

      | j j k | j k + 1 j k 1 = number of indices strictly between j k and j k + 1 | G n \ G n | | G n G n | = b n .

    5. |i k+1i k | ⩽ b n + 1. Indeed, looking at Figure 2 and keeping in mind our conventions for the left and right boundaries, we have

      | i k + 1 i k | = i k + 1 i k = number of indices between i k and i k + 1 ( including i k + 1 ) | G n \ G n | + 1 | G n G n | + 1 = b n + 1 .

    6. x j k , n j k d n m ( G n ) + 2 b n d n by (3.6) (if j k J n ) or by direct verification (if j k = 0).

    7. x i k , n i k d n m ( G n ) by definition of m ( G n ) (if i k I n ) or by direct verification (if i k = 0).

    8. x i k + 1 , n i k + 1 d n max m ( G n ) , 1 d n + 1 d n = max m ( G n ) , 1 d n , where the quantity 1/d n takes into account the boundary case i k+1 = d n + 1.

Thus,

x j , n j d n | x j , n x j k , n | + x j k , n j k d n + j k d n j d n x j k + 1 , n x j k , n + 2 ε n + m ( G n ) + 2 b n d n + b n d n = | x i k + 1 , n x i k , n | + 2 ε n + m ( G n ) + 3 b n d n x i k + 1 , n i k + 1 d n + i k + 1 d n i k d n + i k d n x i k , n + 2 ε n + m ( G n ) + 3 b n d n max m ( G n ) , 1 d n + b n + 1 d n + m ( G n ) + 2 ε n + m ( G n ) + 3 b n d n 3 m ( G n ) + b n + 2 d n + 2 ε n + 3 b n d n .

In conclusion, by combining the two considered cases, for all j = 1 , , d n we have

x j , n j d n 3 m ( G n ) + b n + 2 d n + 2 ε n + 3 b n d n ,

which is a quantity independent of j and tending to 0 as n → ∞ (recall that b n = o(d n ) and d n d n as n → ∞). Thus, the thesis (3.5) follows. □

Figure 2: Illustration for the proof of Lemma 3.7.

3.4 Properties of continuous functions satisfying particular conditions

Lemma 3.8 highlights a property of continuous monotone functions on a compact interval. This property can be proved on the basis of the following more general results:

  1. if f : [ a , b ] R is continuous and strictly monotone, then its inverse f −1: f([a, b]) → [a, b] is continuous and strictly monotone;

  2. if f : [ a , b ] R is continuous and monotone, then f is uniformly continuous.

However, for the reader’s convenience, we include a direct proof of Lemma 3.8.

Lemma 3.8.

Let f : [ a , b ] R be continuous and strictly monotone on [a, b]. Then, for every δ > 0 there exists ɛ > 0 such that

$[f(x)-\varepsilon, f(x)+\varepsilon] \subseteq f([x-\delta, x+\delta]) \qquad \forall\, x\in[a+\delta, b-\delta].$

Proof.

We prove the lemma under the assumption that f is strictly increasing on [a, b]; the proof in the case where f is strictly decreasing on [a, b] is completely analogous. Fix δ > 0. Since f is continuous and strictly increasing on [a, b], the function $f_\delta^+(x) = f(x+\delta) - f(x)$ is continuous and strictly positive on [a, b − δ]. As a consequence, $f_\delta^+(x)$ has a strictly positive minimum ɛ + > 0 on [a, b − δ]:

$f(x+\delta) - f(x) \geqslant \varepsilon_+ \qquad \forall\, x\in[a, b-\delta].$

Similarly, the function $f_\delta^-(x) = f(x) - f(x-\delta)$ is continuous and strictly positive on [a + δ, b] and so it has a strictly positive minimum ɛ − > 0 on [a + δ, b]:

$f(x) - f(x-\delta) \geqslant \varepsilon_- \qquad \forall\, x\in[a+\delta, b].$

If we set $\varepsilon = \min(\varepsilon_+, \varepsilon_-) > 0$, then we see that

$f(x+\delta) - f(x) \geqslant \varepsilon, \qquad f(x) - f(x-\delta) \geqslant \varepsilon, \qquad \forall\, x\in[a+\delta, b-\delta],$

which is equivalent to

$f(x-\delta) \leqslant f(x) - \varepsilon, \qquad f(x) + \varepsilon \leqslant f(x+\delta), \qquad \forall\, x\in[a+\delta, b-\delta],$

Thus, for every x ∈ [a + δ, b − δ] we have [f(x) − ɛ, f(x) + ɛ] ⊆ [f(x − δ), f(x + δ)] = f([x − δ, x + δ]), where the latter equality follows from the fact that f is continuous and monotone increasing. □

Lemma 3.9 shows that the preimage of any point through a continuous function f : [ a , b ] R having a finite number of local maximum/minimum points is finite.

Lemma 3.9.

Let I be a bounded real interval and let f : I R be continuous on I with a finite number of local maximum points and local minimum points. Then, for every λ R , the set f −1(λ) = {xI: f(x) = λ} is finite.

Proof.

For a given λ R , suppose that ξ, η are any two distinct points in f −1(λ) with ξ < η. The function f : [ ξ , η ] R is continuous by assumption, is not constant on [ξ, η] by the assumption that it has a finite number of local maximum/minimum points, and satisfies f(ξ) = f(η) = λ. Thus, at least one local extremum point of f (either the absolute maximum or the absolute minimum of f on [ξ, η]) lies in the open interval (ξ, η).

Now, suppose by contradiction that f −1(λ) is infinite for a certain λ R . Since f −1(λ) is contained in the compact interval I ̄ , it must have an accumulation point α I ̄ . Hence, we can find a strictly monotone sequence { ξ i } i f 1 ( λ ) such that ξ i α. In each interval between two consecutive points ξ i and ξ i+1 we have at least one local extremum point of f by the reasoning at the beginning of the proof. We conclude that f has infinitely many local extremum points, which is a contradiction to the hypothesis. □

Corollary 3.1.

Let I be a bounded real interval and let f : I R be continuous on I with a finite number of local maximum points, local minimum points, and discontinuity points. Then, for every λ R , the set f −1(λ) = {xI: f(x) = λ} is finite.

Proof.

Let a < b be the endpoints of the interval I and let $a = \eta_0 < \eta_1 < \cdots < \eta_\ell < \eta_{\ell+1} = b$ be the discontinuity points of f, to which we also add the boundary points η 0 = a and η ℓ+1 = b. Note that the restriction $f:(\eta_j,\eta_{j+1})\to\mathbb{R}$ is continuous and satisfies the assumptions of Lemma 3.9 for every j = 0, …, ℓ. Hence, for every $\lambda\in\mathbb{R}$, the set

$f^{-1}(\lambda) = \{x\in I : f(x)=\lambda\} \subseteq \bigcup_{j=0}^{\ell} \{x\in(\eta_j,\eta_{j+1}) : f(x)=\lambda\} \cup \{\eta_0,\eta_1,\ldots,\eta_{\ell+1}\}$

is finite by Lemma 3.9. □

3.5 Proof of Theorem 2.2

The last results we need to prove Theorem 2.2 are the following two technical lemmas. Lemma 3.10 provides a straightforward estimate of the largest number of points taken from a uniform grid that can lie in a fixed interval; the proof is left to the reader. Lemma 3.11 is a slight generalization of ref. [1], Ex. 3.3].

Lemma 3.10.

Let h > 0 and let { ϑ i , h } i Z be a uniform grid in R with stepsize h, say ϑ i,h = x 0 + ih with x 0 R and i Z . Then, for any interval [ α , β ] R , we have

$|\{i\in\mathbb{Z} : \vartheta_{i,h}\in[\alpha,\beta]\}| \leqslant (\beta-\alpha)/h + 1.$

Lemma 3.11.

For every ɛ > 0, let { q n ( ε ) } n be a sequence of numbers such that q n (ɛ) → q(ɛ) as n → ∞ and q(ɛ) → 0 as ɛ → 0. Then, there exists a sequence of positive numbers { ε n } n such that ɛ n → 0 and q n (ɛ n ) → 0.

Proof.

Since q n (ɛ) → q(ɛ) for every ɛ > 0,

  1. for ɛ = 1 there exists n 1 such that |q n (1) − q(1)| ⩽ 1 for nn 1,

  2. for ε = 1 2 there exists n 2 > n 1 such that | q n ( 1 2 ) q ( 1 2 ) | 1 2 for nn 2,

  3. for ε = 1 3 there exists n 3 > n 2 such that | q n ( 1 3 ) q ( 1 3 ) | 1 3 for nn 3,

Define

  1. ɛ n = 1 for n < n 2,

  2. ε n = 1 2 for n 2n < n 3,

  3. ε n = 1 3 for n 3n < n 4,

By construction, ɛ n → 0 and |q n (ɛ n ) − q(ɛ n )| ⩽ ɛ n for nn 2, so |q n (ɛ n )| ⩽ |q(ɛ n )| + ɛ n for nn 2 and q n (ɛ n ) → 0. □

Proof of Theorem 2.2.

Let θ i,n = a + i(ba)/d n , i = 1, …, d n . It is clear that the grid { θ i , n } i = 1 , , d n [ a , b ] is a.u. in [a, b]. Hence, by Corollary 2.1, for every n there exists a permutation τ n of {1, …, d n } such that

(3.7) $\max_{i=1,\ldots,d_n} | f(\theta_{i,n}) - \lambda_{\tau_n(i),n} | = \max_{i=1,\ldots,d_n} | f(\theta_{i,n}) - \mu_{i,n} | = \varepsilon_n \to 0,$

where for simplicity we have set μ i , n = λ τ n ( i ) , n for all i = 1, …, d n . Moreover, Λ n f([a, b]) by hypothesis. This implies that, for every n and every i = 1, …, d n , the set f −1(μ i,n ) is finite (by Corollary 3.1) and non-empty. Thus, we can define the grid G n = { x i , n } i = 1 , , d n [ a , b ] such that, for every n and every i = 1, …, d n , the point x i,n is chosen as one of the closest points to θ i,n in f −1(μ i,n ), i.e.,

(3.8) $f(x_{i,n}) = \mu_{i,n}, \qquad |x_{i,n} - \theta_{i,n}| = \min_{x\in f^{-1}(\mu_{i,n})} |x - \theta_{i,n}| = \min_{x\in[a,b]:\, f(x)=\mu_{i,n}} |x - \theta_{i,n}|.$

We show that G n is a.u. in [a, b] (provided that we arrange its points in increasing order). Once this is done, the theorem is proved.

For every δ, ɛ ⩾ 0 and every n, we define the “bad” sets

$E_{\delta,\varepsilon} = \big\{x\in[a+\delta, b-\delta] : [f(x)-\varepsilon, f(x)+\varepsilon] \not\subseteq f([x-\delta, x+\delta])\big\} \cup [a, a+\delta) \cup (b-\delta, b], \qquad E_{\delta,n} = \{i\in\{1,\ldots,d_n\} : \theta_{i,n}\in E_{\delta,\varepsilon_n}\}.$

We call them “bad” sets, because if $i\notin E_{\delta,n}$, i.e., $\theta_{i,n}\notin E_{\delta,\varepsilon_n}$, then “things are fine” in the sense that $|x_{i,n} - \theta_{i,n}| \leqslant \delta$. In formulas, for every δ > 0, every n and every i = 1, …, d n , we have

(3.9) $i\in(E_{\delta,n})^c \;\Longrightarrow\; \theta_{i,n}\in(E_{\delta,\varepsilon_n})^c \;\Longrightarrow\; |x_{i,n} - \theta_{i,n}| \leqslant \delta,$

where ( E δ , n ) c is the complement of E δ , n in {1, …, d n } and ( E δ , ε n ) c is the complement of E δ , ε n in [a, b]. To prove (3.9), suppose that θ i , n ( E δ , ε n ) c . Then, by definition of E δ , ε n , we have θ i,n ∈ [a + δ, bδ] and [f(θ i,n ) − ɛ n , f(θ i,n ) + ɛ n ] ⊆ f([θ i,n δ, θ i,n + δ]). Since μ i,n ∈ [f(θ i,n ) − ɛ n , f(θ i,n ) + ɛ n ] by (3.7), we infer that μ i,n f([θ i,n δ, θ i,n + δ]). Hence, there exists y i,n ∈ [θ i,n δ, θ i,n + δ] such that f(y i,n ) = μ i,n . But then we have y i,n f −1(μ i,n ) and |y i,n θ i,n | ⩽ δ, which implies |x i,n θ i,n | ⩽ |y i,n θ i,n | ⩽ δ by our choice of x i,n as one of the closest points to θ i,n in f −1(μ i,n ); see (3.8). This concludes the proof of (3.9).

Now, let a = ξ 0 < ξ 1 < … < ξ k < ξ k+1 = b be the local maximum points, local minimum points, and discontinuity points of f, to which we also add the boundary points ξ 0 = a and ξ k+1 = b. For every j = 0, …, k, the function f is continuous on (ξ j , ξ j+1) and has no local maximum/minimum points on (ξ j , ξ j+1), so it is strictly monotone on (ξ j , ξ j+1). Thus, by Lemma 3.8 applied to f : [ ξ j + δ / 2 , ξ j + 1 δ / 2 ] R , for every j = 0, …, k and every δ > 0 there exists ɛ (j,δ) > 0 such that

[ f ( x ) ε ( j , δ ) , f ( x ) + ε ( j , δ ) ] f ( [ x δ / 2 , x + δ / 2 ] ) f ( [ x δ , x + δ ] ) x [ ξ j + δ , ξ j + 1 δ ] .

Hence, for every δ > 0 there exists ɛ (δ) = min j=0,…,k ɛ (j,δ) > 0 such that

(3.10) [f(x) − ε^{(δ)}, f(x) + ε^{(δ)}] ⊆ f([x − δ, x + δ])   ∀ x ∈ ⋃_{j=0}^{k} [ξ_j + δ, ξ_{j+1} − δ].

For every δ > 0, let n δ be such that ɛ n ɛ (δ) for nn δ . If nn δ and i ∈ {1, …, d n } is an index such that θ i , n j = 0 k [ ξ j + δ , ξ j + 1 δ ] , then in particular θ i,n ∈ [a + δ, bδ] and, by (3.10),

[ f ( θ i , n ) ε n , f ( θ i , n ) + ε n ] f ( θ i , n ) ε ( δ ) , f ( θ i , n ) + ε ( δ ) f ( [ θ i , n δ , θ i , n + δ ] ) .

Hence, θ_{i,n} ∉ E_{δ,ε_n}, i.e., i ∉ E_{δ,n}. It follows that, for every δ > 0 and every n ⩾ n_δ,

|E_{δ,n}| = |{i ∈ {1, …, d_n} : θ_{i,n} ∈ E_{δ,ε_n}}| ⩽ |{i ∈ {1, …, d_n} : θ_{i,n} ∈ ⋃_{j=0}^{k+1} [ξ_j − δ, ξ_j + δ]}| ⩽ (k + 2) (2δ d_n/(b − a) + 1),

where the latter inequality is due to Lemma 3.10. We can therefore choose, by Lemma 3.11, a sequence of positive numbers { δ n } n such that δ n → 0 and | E δ n , n | / d n 0 .

To conclude the proof, let G n = { x i , n } i = 1 , , d n be the sequence of d n points defined as follows:

x′_{i,n} = x_{i,n}  if θ_{i,n} ∈ (E_{δ_n,ε_n})^c,   x′_{i,n} = θ_{i,n}  if θ_{i,n} ∈ E_{δ_n,ε_n}.

G n [ a , b ] and G n is a.u. in [a, b] because its distance from the a.u. grid { θ i , n } i = 1 , , d n is uniformly bounded by δ n → 0. Indeed, by (3.9),

max i = 1 , , d n | x i , n θ i , n | = max i { 1 , , d n } : θ i , n ( E δ n , ε n ) c | x i , n θ i , n | δ n .

The grid G n differs from the original grid G n by at most 2 | E δ n , ε n | = o ( d n ) elements, in the sense that | G n G n | 2 | E δ n , ε n | . Thus, by Lemma 3.7, G n is a.u. in [a, b] (provided that we arrange its points in increasing order). □

3.6 Concatenation lemma

The following lemma is a plain consequence of Definition 1.1 and has often been used in the literature, but a lucid statement and proof have never been provided. We therefore provide the details below.

Lemma 3.12

(concatenation lemma). Let { A n } n be a matrix-sequence, let f : [ a , b ] C r × r be measurable, and suppose that { A n } n λ f . Let λ 1 ( f ) , , λ r ( f ) : [ a , b ] C be r measurable functions such that λ 1(f(x)), …, λ r (f(x)) are the eigenvalues of f(x) for every x ∈ [a, b]. Then { A n } n λ f ̃ , where f ̃ is the concatenation of resized versions of λ 1(f), …, λ r (f) given by

f ̃ : [ 0,1 ] C , f ̃ ( y ) = λ 1 ( f ( a + ( b a ) r y ) ) , 0 y < 1 r , λ 2 ( f ( a + ( b a ) ( r y 1 ) ) ) , 1 r y < 2 r , λ 3 ( f ( a + ( b a ) ( r y 2 ) ) ) , 2 r y < 3 r , λ r ( f ( a + ( b a ) ( r y r + 1 ) ) ) , r 1 r y 1 .

Proof.

The result follows from Definition 1.1, after observing that, for every F C c ( C ) ,

0 1 F ( f ̃ ( y ) ) d y = i = 1 r ( i 1 ) / r i / r F ( λ i ( f ( a + ( b a ) ( r y i + 1 ) ) ) ) d y = i = 1 r 1 ( b a ) r a b F ( λ i ( f ( x ) ) ) d x = 1 b a a b i = 1 r F ( λ i ( f ( x ) ) ) r d x ,

where in the second equality we have used the change of variable formula for the Lebesgue integral. □

3.7 Restriction operator and asymptotic spectral distribution of restricted matrix-sequences

For every n ⩾ 1, let Ξ_n be the uniform grid in [0, 1] given by Ξ_n = {i/(n + 1): i = 1, …, n}. If E ⊆ ℝ, we define d_n^E as the number of points of Ξ_n inside E, i.e., d_n^E = |Ξ_n ∩ E|. If A is a square matrix of size n and E ⊆ ℝ, we define R_E(A) as the principal submatrix of A of size d_n^E obtained from A by selecting the rows and columns corresponding to indices i ∈ {1, …, n} such that i/(n + 1) ∈ E. For the proof of the next lemma, see [3], Lem. 4.9].
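To illustrate this definition, the following Python sketch (not taken from the paper's code; the function name and the example are ours) computes d_n^E and R_E(A) for a generic square matrix A and a set E specified through a 0/1 predicate:

import numpy as np

def restrict(A, E):
    """Return R_E(A) and d_n^E for a square matrix A of size n, with E given as a predicate on [0, 1]."""
    n = A.shape[0]
    grid = np.arange(1, n + 1) / (n + 1)        # Xi_n = {i/(n+1) : i = 1, ..., n}
    idx = np.array([i for i, x in enumerate(grid) if E(x)])
    d_E = len(idx)                              # d_n^E = |Xi_n ∩ E|
    return A[np.ix_(idx, idx)], d_E

# Example: restrict a 10 x 10 diagonal matrix to E = [0, 1/2].
A = np.diag(np.arange(1.0, 11.0))
R, d = restrict(A, lambda x: x <= 0.5)
print(d, R.shape)                               # 5 (5, 5)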

Lemma 3.13.

Let Ω ⊆ [0, 1] be a regular set with μ 1(Ω) > 0, let { Γ n } n be a sequence of measurable sets contained in [0, 1], and suppose that the quantity d_{d_n}^{Ω △ Γ_n} satisfies d_{d_n}^{Ω △ Γ_n} = o(d_n). Then, for every matrix-sequence { A n } n formed by Hermitian matrices with A n of size d n , we have the equivalence

{R_Ω(A_n)}_n ∼_λ f ⟺ {R_{Γ_n}(A_n)}_n ∼_λ f.

3.8 GLT sequences

Let { A n } n be a matrix-sequence and let ϰ : [ 0,1 ] × [ π , π ] C be a measurable function. We say that { A n } n is a GLT sequence with symbol ϰ, and we write { A n } n GLT ϰ , if the pair ( { A n } n , ϰ ) satisfies some special properties that are not reported here as they are difficult to formulate. The interested reader is referred to refs. [1], Ch. 8] and [7] for details. Here, we only collect the properties of GLT sequences that we need in the proof of Theorem 2.3. The first property is reported in the next lemma [3], Lem. 5.1].

Lemma 3.14.

Let { A n } n be a matrix-sequence formed by Hermitian matrices, let ϰ : [ 0,1 ] × [ π , π ] C be measurable, and suppose that { A n } n GLT ϰ . Then, { R Ω ( A n ) } n λ ϰ | Ω × [ π , π ] for every regular set Ω ⊆ [0, 1].

The second property is reported in the next lemma [32], Th. 2].

Lemma 3.15.

Let g : [0, 1] → ℂ be measurable and let {D_n}_n be a matrix-sequence formed by diagonal matrices such that {D_n}_n ∼_λ g. Then, there exists a matrix-sequence formed by permutation matrices {P_n}_n such that {P_n D_n P_n^T}_n ∼_GLT ϰ(x, θ) = g(x).

The third property is reported in the next lemma, which has never appeared in the literature.

Lemma 3.16

(splitting of GLT sequences formed by diagonal matrices). Let g : [ 0,1 ] C be measurable and let { Λ n = { λ 1 , n , , λ d n , n } } n be a sequence of finite multisets of real numbers such that d n → ∞ as n → ∞. Assume the following:

  1. { D n } n GLT ϰ ( x , θ ) = g ( x ) with D n = diag ( λ 1 , n , , λ d n , n ) ;

  2. { L n , 1 } n , , { L n , k } n are sequences of natural numbers such that L n,1 + … + L n,k = d n for every n and L n,j /d n → 1/k as n → ∞ for every j = 1, …, k.

Then

diag i = 1 , , L n , j ( λ L n , 1 + + L n , j 1 + i , n ) n λ g | [ ( j 1 ) / k , j / k ] , j = 1 , , k .

Proof.

Let Ω j = [(j − 1)/k, j/k] for j = 1, …, k, and fix j ∈ {1, …, k}. Since { D n } n GLT ϰ ( x , θ ) = g ( x ) and the matrices D n are Hermitian, by Lemma 3.14 we have { R Ω j ( D n ) } n λ ϰ | Ω j × [ π , π ] , which is equivalent to

(3.11) { R Ω j ( D n ) } n λ g | Ω j .

Let

Γ_{n,j} = {(L_{n,1} + ⋯ + L_{n,j−1} + i)/(d_n + 1) : i = 1, …, L_{n,j}}.

For every n, we have

d_{d_n}^{Ω_j △ Γ_{n,j}} = |{i ∈ {1, …, d_n} : i/(d_n + 1) ∈ Ω_j △ Γ_{n,j}}|
 = |{i ∈ {1, …, d_n} : i/(d_n + 1) ∈ Ω_j \ Γ_{n,j}}| + |{i ∈ {1, …, d_n} : i/(d_n + 1) ∈ Γ_{n,j} \ Ω_j}|
 = |{i ∈ {1, …, d_n} : i/(d_n + 1) ∈ Ω_j, i/(d_n + 1) ⩽ (L_{n,1} + ⋯ + L_{n,j−1})/(d_n + 1)}| + |{i ∈ {1, …, d_n} : i/(d_n + 1) ∈ Ω_j, i/(d_n + 1) > (L_{n,1} + ⋯ + L_{n,j})/(d_n + 1)}|
  + |{i ∈ {1, …, d_n} : i/(d_n + 1) ∈ Γ_{n,j}, i/(d_n + 1) < (j − 1)/k}| + |{i ∈ {1, …, d_n} : i/(d_n + 1) ∈ Γ_{n,j}, i/(d_n + 1) > j/k}|
 ⩽ |{i ∈ {1, …, d_n} : (j − 1)/k ⩽ i/(d_n + 1) ⩽ (L_{n,1} + ⋯ + L_{n,j−1})/(d_n + 1)}| + |{i ∈ {1, …, d_n} : (L_{n,1} + ⋯ + L_{n,j})/(d_n + 1) < i/(d_n + 1) ⩽ j/k}|
  + |{i ∈ {1, …, d_n} : (L_{n,1} + ⋯ + L_{n,j−1})/(d_n + 1) < i/(d_n + 1) < (j − 1)/k}| + |{i ∈ {1, …, d_n} : j/k < i/(d_n + 1) ⩽ (L_{n,1} + ⋯ + L_{n,j})/(d_n + 1)}|
 ⩽ |{i ∈ {1, …, d_n} : min((j − 1)/k, (L_{n,1} + ⋯ + L_{n,j−1})/(d_n + 1)) ⩽ i/(d_n + 1) ⩽ max((j − 1)/k, (L_{n,1} + ⋯ + L_{n,j−1})/(d_n + 1))}|
  + |{i ∈ {1, …, d_n} : min(j/k, (L_{n,1} + ⋯ + L_{n,j})/(d_n + 1)) ⩽ i/(d_n + 1) ⩽ max(j/k, (L_{n,1} + ⋯ + L_{n,j})/(d_n + 1))}|
 ⩽ |(d_n + 1)(j − 1)/k − L_{n,1} − ⋯ − L_{n,j−1}| + |(d_n + 1) j/k − L_{n,1} − ⋯ − L_{n,j}| + 2,

where the last inequality is due to Lemma 3.10. Since L n,i /d n → 1/k as n → ∞ for every i = 1, …, k by assumption, we have

(1/d_n) |(d_n + 1)(j − 1)/k − L_{n,1} − ⋯ − L_{n,j−1}| → 0,   (1/d_n) |(d_n + 1) j/k − L_{n,1} − ⋯ − L_{n,j}| → 0,

and so d_{d_n}^{Ω_j △ Γ_{n,j}} = o(d_n). Therefore, by (3.11) and Lemma 3.13,

{ R Γ n , j ( D n ) } n = diag i = 1 , , L n , j ( λ L n , 1 + + L n , j 1 + i , n ) n λ g | Ω j .

3.9 Proof of Lemma 2.1

We have now collected all the ingredients to prove Lemma 2.1.

Proof of Lemma 2.1.

By Lemma 3.12, the hypothesis { D n } n λ f is equivalent to { D n } n λ f ̃ , where f ̃ : [ 0,1 ] R is a concatenation of resized versions of the functions f 1, …, f k . More precisely,

f̃(x) = f_j(a + (b − a)(kx − j + 1)) for x ∈ [(j − 1)/k, j/k), j = 1, …, k,   and   f̃(1) = f_k(b).

By Lemma 3.15, there exists a sequence { τ n } n such that τ n is a permutation of {1, …, d n } and

{ D ̃ n = diag ( λ τ n ( 1 ) , n , , λ τ n ( d n ) , n ) } n GLT f ̃ ( x ) .

By Lemma 3.16, we conclude that

diag i = 1 , , L n , j ( λ τ n ( L n , 1 + + L n , j 1 + i ) , n ) n λ f ̃ | [ ( j 1 ) / k , j / k ] , j = 1 , , k .

Since

k ( j 1 ) / k j / k f ̃ | [ ( j 1 ) / k , j / k ] ( y ) d y = k ( j 1 ) / k j / k f j ( a + ( b a ) ( k y j + 1 ) ) d y = 1 b a a b f j ( x ) d x

(this is proved by direct computation using the change of variable formula for the Lebesgue integral as in the proof of Lemma 3.12), the thesis is proved with

D n , j = diag i = 1 , , L n , j ( λ τ n ( L n , 1 + + L n , j 1 + i ) , n ) , Λ n , j = { λ τ n ( L n , 1 + + L n , j 1 + i ) , n : i = 1 , , L n , j } .

3.10 Proof of Lemma 2.2

In order to prove Lemma 2.2, we need some auxiliary results. The first result is reported in the next lemma [1], Th. 3.1].

Lemma 3.17.

If { A n } n λ f , then

lim_{n→∞} |{i ∈ {1, …, d_n} : λ_i(A_n) ∉ (E_R(f))_ε}| / d_n = 0   ∀ ε > 0,

where d n is the size of A n .

The second result is reported in the next lemma [1], Th. 3.2]. In what follows, the notation { Z n } n σ 0 means that { Z n } n is a matrix-sequence with an asymptotic singular value distribution described by the identically zero function defined on any subset Ω of some R d with 0 < μ d (Ω) < ∞. Hence, regardless of Ω, the notation { Z n } n σ 0 means that 1 d n i = 1 d n F ( σ i ( Z n ) ) F ( 0 ) for all F C c ( R ) , where d n is the size of Z n .

Lemma 3.18.

Let {Z_n}_n be a matrix-sequence with Z_n of size d_n. We have {Z_n}_n ∼_σ 0 if and only if Z_n = R_n + N_n for every n with lim_{n→∞} (d_n)^{−1} rank(R_n) = lim_{n→∞} ‖N_n‖ = 0.

The third result is reported in the next lemma [1], Ex. 5.3].

Lemma 3.19.

Let { X n } n and { Y n } n be matrix-sequences formed by Hermitian matrices, with X n and Y n of the same size. If { X n } n λ f and { Y n } n σ 0 then { X n + Y n } n λ f .

The last result is the following lemma of graph theory. In what follows, given a directed graph G = ( V , E ) and any two nodes i, jV, a directed path from i to j is any sequence of nodes i 1 i 2i q such that i 1 = i, i q = j, and (i a , i a+1) ∈ E for all a = 1, …, q − 1. Note that a directed path from a node i to itself always exists (take the sequence i consisting only of the node i).

Lemma 3.20.

Let X be a finite set, and let A 1, , A k and B 1, , B k be two partitions of X with |A i | = |B i | for every i = 1, …, k. Let G = ( V , E ) be a directed graph on k nodes V = {1, …, k} such that a directed edge (i, j) ∈ E exists if and only if A i B j is not empty. If (i, j) ∈ E then there exists a directed path from j to i.

Proof.

Suppose by contradiction that (i, j) ∈ E but there is no directed path from j to i. Then, the sets of nodes

N i = { nodes with a directed path to  i } , N j = { nodes with a directed path from  j }

are disjoint. Moreover, there is no edge from N j to ( N j ) c , hence

∑_{x∈N_j} |A_x| = ∑_{x∈N_j} ∑_{y∈V} |A_x ∩ B_y| = ∑_{x∈N_j} ∑_{y∈N_j} |A_x ∩ B_y|,
∑_{y∈N_j} |B_y| = ∑_{y∈N_j} ∑_{x∈V} |A_x ∩ B_y| ⩾ |A_i ∩ B_j| + ∑_{x∈N_j} ∑_{y∈N_j} |A_x ∩ B_y|,

where the last inequality follows by letting x vary in N_j ∪ {i} instead of V. We have thus obtained a contradiction, because ∑_{x∈N_j} |A_x| = ∑_{y∈N_j} |B_y| and |A_i ∩ B_j| > 0. □

Proof of Lemma 2.2.

The hypotheses of Lemma 2.1 are satisfied with f 1, …, f k , Λ n as in the statement of Lemma 2.2 and with L n , j = | Λ ̃ n , j | for every n and every j = 1, …, k. Thus, by Lemma 2.1, for every n there exists a partition { Λ ̂ n , 1 , , Λ ̂ n , k } of Λ n such that, for every j = 1, …, k, the following properties hold:

  1. | Λ ̂ n , j | = L n , j = | Λ ̃ n , j | ;

  2. { Λ ̂ n , j } n f j , i.e., { D ̂ n , j } n λ f j , where D ̂ n , j = diag ( λ ̂ 1 , n ( j ) , , λ ̂ L n , j , n ( j ) ) and { λ ̂ 1 , n ( j ) , , λ ̂ L n , j , n ( j ) } = Λ ̂ n , j .

The partition { Λ ̂ n , 1 , , Λ ̂ n , k } satisfies the first two properties required in the thesis of the lemma, but it may not satisfy the third property. Through “successive displacements”, we want to change the partition { Λ ̂ n , 1 , , Λ ̂ n , k } into a new partition {Λ n,1, …, Λ n,k } that satisfies also the third property.

For every j = 1, …, k, since { D ̂ n , j } n λ f j , by Lemmas 3.11 and 3.17 there exists some δ n,j tending to 0 as n → ∞ such that

|Λ̂_{n,j} ∩ ((E_R(f_j))_{δ_{n,j}})^c| / L_{n,j} → 0  as n → ∞.

Note that the previous limit relation continues to hold if we replace L n,j with d n (because L n , j / d n = | Λ ̃ n , j | / d n 1 / k by hypothesis) and δ n,j with δ n = max(δ n,1, …, δ n,k , ɛ n ), where ɛ n is the same as in the assumptions of the lemma. Thus, if we define

(3.12) Ê_{n,j} = Λ̂_{n,j} ∩ ((E_R(f_j))_{δ_n})^c,   j = 1, …, k,

then we have the following: for every n there exists some δ n tending to 0 as n → ∞ such that ɛ n δ n and, for every j = 1, …, k,

(3.13) |Ê_{n,j}| / d_n → 0  as n → ∞.

We remark that, since Λ̃_{n,j} ⊆ (E_R(f_j))_{ε_n} by assumption, we have

(3.14) Ê_{n,j} ⊆ ((E_R(f_j))_{δ_n})^c ⊆ ((E_R(f_j))_{ε_n})^c ⊆ (Λ̃_{n,j})^c.

Now, fix n and take an element x ∈ Ê_{n,1} ∪ ⋯ ∪ Ê_{n,k}. To fix ideas, suppose that x ∈ Ê_{n,1}. By definition of Ê_{n,1} we have x ∈ Λ̂_{n,1}, and by (3.14) we have x ∉ Λ̃_{n,1}. Since {Λ̃_{n,1}, …, Λ̃_{n,k}} is a partition of Λ_n just like {Λ̂_{n,1}, …, Λ̂_{n,k}}, there exists p ∈ {1, …, k} with p ≠ 1 such that x ∈ Λ̃_{n,p}. Note that all hypotheses of Lemma 3.20 are satisfied for X = Λ_n and the partitions {A_1, …, A_k} = {Λ̂_{n,1}, …, Λ̂_{n,k}} and {B_1, …, B_k} = {Λ̃_{n,1}, …, Λ̃_{n,k}}, and moreover (1, p) is an edge of the graph G mentioned in Lemma 3.20 due to the element x ∈ Λ̂_{n,1} ∩ Λ̃_{n,p}. Hence, by Lemma 3.20, there exists in G a directed path from p to 1. This means that there exist indices

i 0 = 1 , i 1 = p , i 2 , i 3 , , i q , i q + 1 = 1

(with i 0, …, i q+1 ∈ {1, …, k} and i 0, …, i q distinct) and corresponding elements

x 0 = x , x 1 , x 2 , x 3 , , x q

(with x 0, …, x q ∈ Λ n ) such that

x_s ∈ Λ̂_{n,i_s} ∩ Λ̃_{n,i_{s+1}},   s = 0, …, q.

As a consequence, we can produce a new partition { Λ ̄ n , 1 , , Λ ̄ n , k } of Λ n with the same cardinalities

| Λ ̄ n , j | = L n , j = | Λ ̂ n , j | , j = 1 , , k ,

by removing x_s from Λ̂_{n,i_s} and adding it to Λ̂_{n,i_{s+1}} for s = 0, …, q. Note that, for every s = 0, …, q,

x_s ∈ Λ̃_{n,i_{s+1}} ⊆ (E_R(f_{i_{s+1}}))_{ε_n} ⊆ (E_R(f_{i_{s+1}}))_{δ_n},

hence

(3.15) x_s ∉ Ê_{n,i_{s+1}},   s = 0, …, q.

Therefore, if in analogy with (3.12) we define

(3.16) Ē_{n,j} = Λ̄_{n,j} ∩ ((E_R(f_j))_{δ_n})^c,   j = 1, …, k,

then we have

Ē_{n,j} ⊆ Ê_{n,j},   j = 1, …, k,   |Ē_{n,1}| = |Ê_{n,1}| − 1,

where the latter equation is due to the fact that x = x_0 has been removed from Ê_{n,1} and has been replaced with x_q ∉ Ê_{n,1}; see (3.15). In conclusion, starting from the original partition

{ Λ ̂ n , 1 , , Λ ̂ n , k } , { E ̂ n , 1 , , E ̂ n , k }

we have produced a new partition

{ Λ ̄ n , 1 , , Λ ̄ n , k } , { E ̄ n , 1 , , E ̄ n , k }

with the same cardinalities

| Λ ̄ n , 1 | = L n , 1 = | Λ ̂ n , 1 | , , | Λ ̄ n , k | = L n , k = | Λ ̂ n , k |

and with

Ē_{n,1} ∪ ⋯ ∪ Ē_{n,k} ⊊ Ê_{n,1} ∪ ⋯ ∪ Ê_{n,k}.

We can now repeat the same procedure for another element x E ̄ n , 1 E ̄ n , k until all the “E-sets” are empty. At the end of the whole construction, we obtain a final partition

{ Λ n , 1 , , Λ n , k } , { E n , 1 , , E n , k }

with the same cardinalities

(3.17) | Λ n , 1 | = L n , 1 = | Λ ̂ n , 1 | , , | Λ n , k | = L n , k = | Λ ̂ n , k |

and with corresponding “E-sets”

(3.18) E_{n,1} = ⋯ = E_{n,k} = ∅,

where E n,j is defined in analogy with (3.12) and (3.16) as follows:

(3.19) E_{n,j} = Λ_{n,j} ∩ ((E_R(f_j))_{δ_n})^c,   j = 1, …, k.

We prove that the partition {Λ n,1, …, Λ n,k } satisfies the three properties required in the thesis of the lemma.

The partition {Λ n,1, …, Λ n,k } satisfies the first property by (3.17) and the third property by (3.18) and (3.19). It only remains to prove that {Λ n,1, …, Λ n,k } satisfies the second property. To this end, we note that the above procedure must be repeated at most a number N n of times equal to

(3.20) N_n = |Ê_{n,1} ∪ ⋯ ∪ Ê_{n,k}| ⩽ |Ê_{n,1}| + ⋯ + |Ê_{n,k}| = o(d_n),

because each time we apply the procedure, the union of the “E-sets” loses an element (the final equality in (3.20) is due to (3.13)). Moreover, each time we apply the procedure, the new partition { Λ n , 1 ( n e w ) , , Λ n , k ( n e w ) } differs from the previous partition { Λ n , 1 ( o l d ) , , Λ n , k ( o l d ) } by at most 1 element per set, in the sense that

| Λ n , j ( n e w ) \ Λ n , j ( o l d ) | 1 , j = 1 , , k .

For example, the first time we apply the procedure, we obtain

| Λ ̄ n , j \ Λ ̂ n , j | 1 , j = 1 , , k .

So, after N n applications of the procedure, we obtain

| Λ n , j \ Λ ̂ n , j | N n , j = 1 , , k .

Thus, for every j = 1, …, k, if we define D n,j as in the second property of the thesis of the lemma, the previous inequality implies that, after a suitable permutation of its diagonal elements, D n,j becomes equal to D ̂ n , j + Δ n , j with Δ n,j a diagonal matrix with r a n k ( Δ n , j ) = | Λ n , j \ Λ ̂ n , j | N n = o ( d n ) . This implies that { Δ n , j } n σ 0 by Lemma 3.18 and { D n , j } n λ f j by Lemma 3.19. □

3.11 Proof of Theorem 2.3

We have now collected all the ingredients to prove Theorem 2.3.

Proof of Theorem 2.3.

The theorem follows immediately from Lemma 2.2 and Corollary 2.1. Indeed, by Lemma 2.2, for every n there exists a partition {Λ n,1, …, Λ n,k } of Λ n such that, for every j = 1, …, k, the following properties hold:

  1. | Λ n , j | = | Λ ̃ n , j | ;

  2. { D n , j } n λ f j , where D n , j = diag ( λ 1 , n ( j ) , , λ | Λ n , j | , n ( j ) ) and { λ 1 , n ( j ) , , λ | Λ n , j | , n ( j ) } = Λ n , j ;

  3. Λ n,j ⊆ [inf[a,b] f j δ n , sup[a,b] f j + δ n ] for some δ n → 0 as n → ∞.

By Corollary 2.1, for every j = 1, …, k and every a.u. grid { x i , n ( j ) } i = 1 , , | Λ n , j | in [a, b] with { x i , n ( j ) } i = 1 , , | Λ n , j | [ a , b ] , if σ n,j and τ n,j are two permutations of {1, …, |Λ n,j |} such that the vectors [ f j ( x σ n , j ( 1 ) , n ( j ) ) , , f j ( x σ n , j ( | Λ n , j | ) , n ( j ) ) ] and [ λ τ n , j ( 1 ) , n ( j ) , , λ τ n , j ( | Λ n , j | ) , n ( j ) ] are sorted in increasing order, we have

max i = 1 , , | Λ n , j | | f j ( x σ n , j ( i ) , n ( j ) ) λ τ n , j ( i ) , n ( j ) | 0 as n .

4 Numerical experiments

In this section, after recalling some properties of Toeplitz matrices, we illustrate our main results through numerical examples.

4.1 Preliminaries on Toeplitz matrices

It is not difficult to see that the conjugate transpose of T n (f) is given by

T n ( f ) * = T n ( f ̄ )

for every fL 1([−π, π]) and every n; see, e.g., [1], Sect. 6.2]. In particular, if f is real a.e., then f ̄ = f a.e. and the matrices T n (f) are Hermitian. The next theorem collects some properties of Toeplitz matrices generated by a real function. For the proof, see [1], Theorems 6.1 and 6.5].

Theorem 4.1.

Let fL 1([−π, π]) be real and let m f = ess inf[−π,π] f and M f = ess sup[−π,π] f. Then, the following properties hold:

  1. T n (f) is Hermitian and the eigenvalues of T n (f) lie in the interval [m f , M f ] for all n;

  2. if f is not a.e. constant, then the eigenvalues of T n (f) lie in (m f , M f ) for all n;

  3. { T n ( f ) } n λ f .

4.2 Numerical examples

Example 4.1.

Let f(θ) = a + b cos θ, f : [−π, π] → ℝ, with a, b ∈ ℝ and b ≠ 0, and let Λ n = {λ 1,n , …, λ n,n } be the multiset consisting of the eigenvalues of the Hermitian Toeplitz matrix T n (f). By Theorem 4.1 and the fact that f is an even function, we have {Λ n } n ∼ f|[0,π] and Λ n ⊆ (min[0,π] f, max[0,π] f). Since f is continuous, f|[0,π] and Λ n satisfy all the assumptions of Corollary 2.1 and Theorem 2.2, and we therefore conclude the following.

  1. For every a.u. grid { θ i , n } i = 1 , , n in [0, π] with { θ i , n } i = 1 , , n [ 0 , π ] , we have

    max i = 1 , , n | f ( θ i , n ) λ τ n ( i ) , n | 0 as n ,

    where τ n is a suitable permutation of {1, …, n}.

  2. There exists an a.u. grid { θ i , n } i = 1 , , n in [0, π] with { θ i , n } i = 1 , , n [ 0 , π ] such that, for every n,

    λ τ n ( i ) , n = f ( θ i , n ) , i = 1 , , n ,

    where τ n is a suitable permutation of {1, …, n}.

The two previous assertions are actually well known in this case, because Λ n = {f(/(n + 1)): i = 1, …, n}; see [33], Th. 2.4].
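For completeness, the identity Λ n = {f(iπ/(n + 1)): i = 1, …, n} is easy to verify numerically; the following Python sketch (illustrative, with parameter values chosen by us) checks it up to machine precision:

import numpy as np

a, b, n = 2.0, -1.0, 200                     # our parameter choices
# T_n(f) is tridiagonal: the Fourier coefficients of f are t_0 = a, t_1 = t_{-1} = b/2
T = a * np.eye(n) + (b / 2) * (np.eye(n, k=1) + np.eye(n, k=-1))
eig = np.sort(np.linalg.eigvalsh(T))
samples = np.sort(a + b * np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
print(np.max(np.abs(eig - samples)))         # ~ 1e-15: the two multisets coincide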

Example 4.2.

Let f : [ π , π ] R ,

(4.1) f(θ) = 1 for 0 ⩽ θ < π/2,   f(θ) = θ + 1 − π/2 for π/2 ⩽ θ ⩽ π,   f(θ) = f(−θ) for −π ⩽ θ < 0,

and let Λ n = {λ 1,n , …, λ n,n } be the multiset consisting of the eigenvalues of the Hermitian Toeplitz matrix T n (f). Figure 3 shows the graph of f over the interval [0, π]. By Theorem 4.1 and the fact that f is an even function, we have { Λ n } n f | [ 0 , π ] and Λ n ⊆ (min[0,π] f, max[0,π] f) = (1, 1 + π/2). Since f is continuous, f|[0,π] and Λ n satisfy all the assumptions of Corollary 2.1, and we therefore conclude that, for every a.u. grid { θ i , n } i = 1 , , n in [0, π] with { θ i , n } i = 1 , , n [ 0 , π ] , we have

(4.2) M n = max i = 1 , , n | f ( θ i , n ) λ τ n ( i ) , n | 0 as n ,

where τ n is the permutation of {1, …, n} that sorts λ 1,n , …, λ n,n in increasing order (note that f|[0,π] is increasing). To provide numerical evidence of (4.2), in Table 1 we compute M n for increasing values of n in the case of the a.u. grid θ i,n = /(n + 1), i = 1, …, n. We see from the table that M n → 0 as n → ∞, though the convergence is slow.

Now we observe that f|[0,π] and Λ n do not satisfy the assumptions of Theorem 2.2. Actually, they satisfy all the assumptions of Theorem 2.2 except the hypothesis that f has a finite number of local maximum/minimum points. Indeed, f is constant on [0, π/2] and so all points in [0, π/2) are both local maximum and local minimum points for f according to our Definition 2.1. We observe that, in fact, the thesis of Theorem 2.2 does not hold in this case, i.e., there is no a.u. grid { θ i , n } i = 1 , , n in [0, π] with { θ i , n } i = 1 , , n [ 0 , π ] such that, for every n,

λ τ n ( i ) , n = f ( θ i , n ) , i = 1 , , n ,

for a suitable permutation τ n of {1, …, n}. This is clear, because Λ n ⊂ (1, 1 + π/2) and so any grid { θ i , n } i = 1 , , n [ 0 , π ] satisfying the previous condition must be contained in (π/2, π), which implies that it cannot be a.u. in [0, π].

Figure 3:

Example 4.2: Graph on the interval [0, π] of the function f(θ) defined in (4.1).

Table 1:

Example 4.2: Computation of M n for increasing values of n.

n M n
8 0.0851
16 0.0632
32 0.0454
64 0.0312
128 0.0206
256 0.0132
512 0.0082
1,024 0.0050
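The values M_n in Table 1 can be reproduced, up to the accuracy of the numerical quadrature, with a script along the following lines (an illustrative Python/NumPy/SciPy sketch, not the code used for the paper's experiments):

import numpy as np
from scipy.integrate import quad
from scipy.linalg import toeplitz

def f(t):                                   # the symbol (4.1), extended evenly to [-pi, pi]
    t = abs(t)
    return 1.0 if t < np.pi / 2 else t + 1.0 - np.pi / 2

def fourier_coeff(k):                       # t_k = (1/(2*pi)) * int_{-pi}^{pi} f(t) exp(-i*k*t) dt (real, since f is real and even)
    val, _ = quad(lambda t: f(t) * np.cos(k * t), -np.pi, np.pi, points=[-np.pi / 2, np.pi / 2], limit=200)
    return val / (2 * np.pi)

for n in [8, 16, 32, 64]:
    T = toeplitz([fourier_coeff(k) for k in range(n)])       # symmetric Toeplitz matrix T_n(f)
    eig = np.sort(np.linalg.eigvalsh(T))
    theta = np.arange(1, n + 1) * np.pi / (n + 1)            # a.u. grid in [0, pi]
    samples = np.sort([f(t) for t in theta])
    print(n, np.max(np.abs(samples - eig)))                  # M_n, cf. Table 1

The same script, with f replaced by the symbol (4.3), can also be used to reproduce the values M_n of Table 2 in Example 4.3 below.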

Example 4.3.

Let f : [ π , π ] R ,

(4.3) f(θ) = cos(2θ) + cos(3θ) for 0 ⩽ θ < π/2,   f(θ) = θ for π/2 ⩽ θ ⩽ π,   f(θ) = f(−θ) for −π ⩽ θ < 0,

and let Λ n = {λ 1,n , …, λ n,n } be the multiset consisting of the eigenvalues of the Hermitian Toeplitz matrix T n (f). Figure 4 shows the graph of f over the interval [0, π]. By Theorem 4.1 and the fact that f is an even function, we have {Λ n } n ∼ f|[0,π] and Λ n ⊆ (min[0,π] f, max[0,π] f) = (−25/54 − 10√10/27, π). Note that the function f|[0,π] is not continuous, but it nevertheless satisfies all the assumptions of Corollary 2.1 and Theorem 2.2, and we therefore conclude the following.

  1. For every a.u. grid { θ i , n } i = 1 , , n in [0, π] with { θ i , n } i = 1 , , n [ 0 , π ] , we have

    (4.4) M n = max i = 1 , , n | f ( θ σ n ( i ) , n ) λ τ n ( i ) , n | 0 as n ,

    where σ n and τ n are two permutations of {1, …, n} such that the vectors [ f ( θ σ n ( 1 ) , n ) , , f ( θ σ n ( n ) , n ) ] and [ λ τ n ( 1 ) , n , , λ τ n ( n ) , n ] are sorted in increasing order.

  2. There exists an a.u. grid { θ i , n } i = 1 , , n in [0, π] with { θ i , n } i = 1 , , n [ 0 , π ] such that, for every n,

    λ τ n ( i ) , n = f ( θ i , n ) , i = 1 , , n ,

    where τ n is a suitable permutation of {1, …, n}.

To provide numerical evidence of (4.4), in Table 2 we compute M n for increasing values of n in the case of the a.u. grid θ i,n = /(n + 1), i = 1, …, n. We see from the table that M n → 0 as n → ∞, though the convergence is slow.

Figure 4:

Example 4.3: Graph on the interval [0, π] of the function f(θ) defined in (4.3).

Table 2:

Example 4.3: Computation of M n for increasing values of n.

n M n
8 0.7220
16 0.5625
32 0.4471
64 0.2956
128 0.1783
256 0.1096
512 0.0605
1,024 0.0373

Example 4.4.

Consider the following second-order differential problem:

−(a(x)u′(x))′ = g(x),   x ∈ (0, 1),   u(0) = α,   u(1) = β,

where a : [ 0,1 ] R is assumed to be continuous and non-negative on [0, 1]. In the classical finite difference method based on second-order central finite differences over the uniform grid x i = i/(n + 1), i = 0, …, n + 1, the computation of the numerical solution reduces to solving a linear system whose coefficient matrix is the symmetric n × n tridiagonal matrix given by

A_n =
[ a_{1/2} + a_{3/2}      −a_{3/2}
      −a_{3/2}      a_{3/2} + a_{5/2}      −a_{5/2}
                         ⋱                    ⋱                    ⋱
                                          −a_{n−1/2}      a_{n−1/2} + a_{n+1/2} ],

where a i = a(x i ) for all i in the real interval [0, n + 1]; see [1], Sect. 10.5.1] for more details. Let f ( x , θ ) = a ( x ) ( 2 2 cos θ ) : [ 0,1 ] × [ 0 , π ] R , and let Λ n = {λ 1,n , …, λ n,n } be the multiset consisting of the eigenvalues of A n . We know from ref. [1], Th. 10.5] that { A n } n λ f , i.e., { Λ n } n f . Moreover, in view of the dyadic decomposition of A n in ref. [34], Sect. 2], we have

Λ_n ⊆ [λ_min(T_n(2 − 2 cos θ)) min_{[0,1]} a,  λ_max(T_n(2 − 2 cos θ)) max_{[0,1]} a] ⊆ [0, 4 max_{[0,1]} a] = [min_{[0,1]×[0,π]} f, max_{[0,1]×[0,π]} f] = f([0,1] × [0,π]) = E_R(f),

where the latter equality follows from the continuity of f and the fact that the domain [0, 1] × [0, π] is not “too wild” (in particular, it is contained in the closure of its interior); see [1], Ex. 2.1].

Now, following the notations of Theorem 2.1, let a = (0, 0) and b = (1, π), so that [ a , b ] = [0, 1] × [0, π]. Assume that n is a perfect square, let n = n ( n ) = ( n , n ) , consider the a.u. grid in [ a , b ] given by

G_{n(n)} = {x_{i,n(n)}}_{i=1,…,n(n)},   x_{i,n(n)} = a + i(b − a)/n(n) = (i_1/√n, i_2π/√n),   i = 1, …, n(n),

and let [ f 1 , n , , f n , n ] = [ f i , n ] i = 1 , , n be the same as the vector [ f ( x 1 , n ( n ) ) , , f ( x n , n ( n ) ) ] = [ f ( x i , n ( n ) ) ] i = 1 , , n but indexed with a 1-index i = 1, …, n instead of a 2-index i = 1, …, n . Then, by Theorem 2.1,

(4.5) M n = max i = 1 , , n | f σ n ( i ) , n λ τ n ( i ) , n | 0 as n ,

where σ n and τ n are two permutations of {1, …, n} such that the vectors

[ f σ n ( 1 ) , n , , f σ n ( n ) , n ] = [ f σ n ( i ) , n ] i = 1 , , n , [ λ τ n ( 1 ) , n , , λ τ n ( n ) , n ] = [ λ τ n ( i ) , n ] i = 1 , , n

are sorted in increasing order. To provide numerical evidence of (4.5), in Table 3 we compute M n for increasing values of n and different choices of a(x). In all cases, we see from the table that M n → 0 as n → ∞, though the convergence is slow.

Table 3:

Example 4.4: Computation of M n for increasing values of n and different choices of a(x).

(a) a(x) = ex (b) a(x) = 2 + cos(3x) (c) a(x) = x log(1 + x)
n M n n M n n M n
900 0.0684 900 0.1471 900 0.1240
1,600 0.0559 1,600 0.1132 1,600 0.0915
2,500 0.0473 2,500 0.0890 2,500 0.0717
3,600 0.0411 3,600 0.0738 3,600 0.0583
4,900 0.0364 4,900 0.0634 4,900 0.0497
6,400 0.0326 6,400 0.0558 6,400 0.0435
8,100 0.0296 8,100 0.0484 8,100 0.0383
10,000 0.0271 10,000 0.0436 10,000 0.0344
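The following Python sketch (illustrative; not the original experimental code) reproduces, approximately, the values M_n of Table 3 by assembling A_n, computing its eigenvalues, and comparing them with the sorted samples of f over the tensor grid described above:

import numpy as np

def M_n(a, n):
    # assemble the tridiagonal matrix A_n with a_{i+1/2} = a((i + 1/2)/(n + 1))
    m = int(round(np.sqrt(n))); assert m * m == n
    ah = a((np.arange(n + 1) + 0.5) / (n + 1))         # a_{1/2}, a_{3/2}, ..., a_{n+1/2}
    A = np.diag(ah[:-1] + ah[1:]) - np.diag(ah[1:-1], 1) - np.diag(ah[1:-1], -1)
    eig = np.sort(np.linalg.eigvalsh(A))
    x = np.arange(1, m + 1) / m                        # i_1 / sqrt(n)
    th = np.arange(1, m + 1) * np.pi / m               # i_2 * pi / sqrt(n)
    samples = np.sort(np.outer(a(x), 2 - 2 * np.cos(th)), axis=None)   # f(x, theta) on the grid
    return np.max(np.abs(samples - eig))

for n in [900, 1600, 2500]:
    print(n, M_n(np.exp, n))                           # compare with Table 3(a)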

Example 4.5.

Consider the two-dimensional Poisson problem

−Δu(x) = g(x),   x ∈ (0, 1)²,   u(x) = h(x),   x ∈ ∂((0, 1)²).

In the isogeometric Galerkin discretization based on tensor-product biquadratic B-splines defined over the uniform grid i / n for i = 0, …, n and n = n (n) = (n, n), the computation of the numerical solution reduces to solving a linear system whose coefficient matrix is the symmetric n 2 × n 2 matrix given by

A n = K n M n + M n K n ,

where ⊗ is the Kronecker tensor product and K n , M n are the symmetric n × n matrices given by

K_n = (1/6) ·
[  8  −1  −1
  −1   6  −2  −1
  −1  −2   6  −2  −1
         ⋱    ⋱    ⋱    ⋱    ⋱
        −1  −2   6  −2  −1
             −1  −2   6  −1
                  −1  −1   8 ],
M_n = (1/120) ·
[ 40  25   1
  25  66  26   1
   1  26  66  26   1
         ⋱    ⋱    ⋱    ⋱    ⋱
         1  26  66  26   1
              1  26  66  25
                   1  25  40 ];

see [2], Sect. 7.6] for more details. Let

f : [ 0 , π ] 2 R , f ( θ 1 , θ 2 ) = ϰ ( θ 1 ) μ ( θ 2 ) + μ ( θ 1 ) ϰ ( θ 2 ) ,

where

ϰ(θ) = 1 − (2/3) cos θ − (1/3) cos(2θ),   μ(θ) = 11/20 + (13/30) cos θ + (1/60) cos(2θ),

and let Λ n = λ 1 , n , , λ n 2 , n be the multiset consisting of the eigenvalues of A n . We know from ref. [2], Th. 7.7] that { A n } n λ f , i.e., { Λ n } n f . Moreover, numerical experiments reveal that there are no outliers, i.e.,

Λ_n ⊆ [0, 3/2] = [min_{[0,π]²} f, max_{[0,π]²} f] = f([0, π]²) = E_R(f)

for all n. Thus, Theorem 2.1 applies in this case. In fact, in view of the spectral decompositions obtained in ref. [35], Sect. 3.3], the eigenvalues of A n are exactly given by

f ( x i , n ( n ) ) , i = 1 , , n ,

where n = n (n) = (n, n) as above and G n ( n ) = { x i , n ( n ) } i = 1 , , n is the a.u. grid in [0, π]2 given by

x_{i,n(n)} = (i_1π/n, i_2π/n),   i = 1, …, n(n).
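An illustrative Python check of this statement (the matrix assembly follows the banded patterns of K_n and M_n reported above; the script is ours and not part of the original experiments) is the following. According to the exact spectral decomposition recalled above, the printed discrepancy should be of the order of machine precision.

import numpy as np

def K_M(n):
    # banded matrices K_n (stiffness) and M_n (mass) with the stencils reported above
    K = 6.0 * np.eye(n) - 2 * (np.eye(n, k=1) + np.eye(n, k=-1)) - (np.eye(n, k=2) + np.eye(n, k=-2))
    K[0, 0] = K[-1, -1] = 8; K[0, 1] = K[1, 0] = K[-1, -2] = K[-2, -1] = -1
    M = 66.0 * np.eye(n) + 26 * (np.eye(n, k=1) + np.eye(n, k=-1)) + (np.eye(n, k=2) + np.eye(n, k=-2))
    M[0, 0] = M[-1, -1] = 40; M[0, 1] = M[1, 0] = M[-1, -2] = M[-2, -1] = 25
    return K / 6, M / 120

kappa = lambda t: 1 - (2 / 3) * np.cos(t) - (1 / 3) * np.cos(2 * t)
mu = lambda t: 11 / 20 + (13 / 30) * np.cos(t) + (1 / 60) * np.cos(2 * t)

n = 30
K, M = K_M(n)
A = np.kron(K, M) + np.kron(M, K)                      # A_n of size n^2 x n^2
eig = np.sort(np.linalg.eigvalsh(A))
th = np.arange(1, n + 1) * np.pi / n                   # grid i*pi/n, i = 1, ..., n
samples = np.sort(np.outer(kappa(th), mu(th)) + np.outer(mu(th), kappa(th)), axis=None)
print(np.max(np.abs(eig - samples)))                   # expected to be ~ machine precision (no outliers)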

Example 4.6.

Consider the following second-order differential problem:

−u″(x) = g(x),   x ∈ (0, 1),   u(0) = α,   u(1) = β.

In the classical Galerkin method with basis functions given by the 2n − 1 B-splines of degree 2 and smoothness C^0([0, 1]) defined over the uniform knot sequence {0, 0, 0, 1/n, 1/n, 2/n, 2/n, …, (n − 1)/n, (n − 1)/n, 1, 1, 1} and vanishing at the boundary points x = 0 and x = 1, the computation of the numerical solution reduces to solving a linear system whose coefficient matrix is a symmetric (2n − 1) × (2n − 1) matrix A_n, which coincides with the matrix n^{−1}K_{n,2,0} of Appendix A; see [22], Sect. 2.3.2] for more details. Let

f(θ) = (1/3) ·
[ 4                −2 − 2e^{iθ}
  −2 − 2e^{−iθ}     8 − 4 cos θ ],   f : [0, π] → ℂ^{2×2},

and let Λ n = {λ 1,n , …, λ 2n−1,n } be the multiset consisting of the eigenvalues of A n (sorted in increasing order for later convenience). We know from refs. [5], Th. 6.5] and [22], Sect. 2.3.2] that { A n } n λ f , i.e., { Λ n } n f . By Definition 1.1, the latter is equivalent to { Λ n } n diag ( f 1 , f 2 ) , where f 1 , f 2 : [ 0 , π ] R are given by

(4.6) f_1(θ) = λ_1(f(θ)) = 2 − (2/3) cos θ − (2/3) √(3 + cos²θ),

(4.7) f_2(θ) = λ_2(f(θ)) = 2 − (2/3) cos θ + (2/3) √(3 + cos²θ).
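As a quick sanity check (illustrative, not part of the original derivation), the formulas (4.6)–(4.7) can be verified numerically by diagonalizing the 2 × 2 matrix f(θ) at a few sample points:

import numpy as np

def f(theta):                                # the 2x2 symbol of Example 4.6
    return np.array([[4.0, -2 - 2 * np.exp(1j * theta)],
                     [-2 - 2 * np.exp(-1j * theta), 8 - 4 * np.cos(theta)]]) / 3

f1 = lambda t: 2 - (2 / 3) * np.cos(t) - (2 / 3) * np.sqrt(3 + np.cos(t) ** 2)   # (4.6)
f2 = lambda t: 2 - (2 / 3) * np.cos(t) + (2 / 3) * np.sqrt(3 + np.cos(t) ** 2)   # (4.7)

for theta in np.linspace(0, np.pi, 7):
    lam = np.linalg.eigvalsh(f(theta))       # eigenvalues sorted increasingly
    print(theta, np.max(np.abs(lam - np.array([f1(theta), f2(theta)]))))         # ~ 1e-16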

Figure 5 shows the graphs of the functions f 1,2(θ) and the set of eigenvalues Λ n for n = 20. The eigenvalues λ i,n , i = 1, …, 2n − 1, are positioned at /n for i = 1, …, n and (in)π/n for i = n + 1, …, 2n − 1. Note that

E_R(f_1) = f_1([0, π]) = [f_1(0), f_1(π)] = [0, 4/3],   E_R(f_2) = f_2([0, π]) = [f_2(0), f_2(π)] = [8/3, 4].

From the figure, we may assume that the hypotheses of Theorem 2.3 are satisfied with the partition { Λ ̃ n , 1 , Λ ̃ n , 2 } of Λ n given by Λ ̃ n , 1 = { λ 1 , n , , λ n , n } and Λ ̃ n , 2 = { λ n + 1 , n , , λ 2 n 1 , n } . Thus, by Theorem 2.3, for every n there exists a partition {Λ n,1, Λ n,2} of Λ n (which must necessarily coincide with the original partition { Λ ̃ n , 1 , Λ ̃ n , 2 } ) with the following properties:

  1. | Λ n , 1 | = | Λ ̃ n , 1 | = n and | Λ n , 2 | = | Λ ̃ n , 2 | = n 1 ;

  2. Λ_{n,1} ⊆ [−δ_n, 4/3 + δ_n] and Λ_{n,2} ⊆ [8/3 − δ_n, 4 + δ_n] for some δ_n → 0 as n → ∞;

  3. { Λ n , 1 } n f 1 and { Λ n , 2 } n f 2 ;

  4. if { θ i , n } i = 1 , , n and { ϑ i , n } i = 1 , , n 1 are any two a.u. grids in [0, π] contained in [0, π], then

    (4.8) M n , 1 = max i = 1 , , n | f 1 ( θ i , n ) λ i , n | 0 as n ,

    (4.9) M n , 2 = max i = 1 , , n 1 | f 2 ( ϑ i , n ) λ i + n , n | 0 as n .

Actually, we can say more than (4.8) and (4.9). Indeed, numerical experiments reveal that M n,1 = M n,2 = 0 for all n if we choose the a.u. grids suggested by Figure 5, i.e., θ i,n = /n, i = 1, …, n, and ϑ i,n = /n, i = 1, …, n − 1. In other words, the eigenvalues of A n are explicitly given by

{ f 1 ( θ i , n ) : i = 1 , , n } { f 2 ( ϑ i , n ) : i = 1 , , n 1 } .

We refer the reader to Appendix A for further explicit formulas for the eigenvalues of B-spline Galerkin discretization matrices. These formulas have been obtained through numerical experiments and provide further confirmations of Theorem 2.3.

Figure 5:

Example 4.6: Graphs of the functions f 1,2(θ) in (4.6)–(4.7) and set of eigenvalues Λ n for n = 20.

5 Conclusions

We have provided new insights into the notion of asymptotic (spectral) distribution by extending previous results due to Bogoya, Böttcher, Grudsky, and Maximenko [25], [26]. In particular, using the concept of monotone rearrangement (quantile function) and matrix analysis arguments from the theory of GLT sequences, we have shown that, under suitable assumptions, if the asymptotic distribution of a sequence of multisets Λ n = { λ 1 , n , , λ d n , n } is described by a function f in the sense of Definition 1.2, then we observe the uniform convergence to 0 of the difference between a proper permutation of the vector [ λ 1 , n , , λ d n , n ] and the vector of samples of f over an a.u. grid in the domain of f. We have also illustrated through numerical experiments the main results of the paper.

We conclude this paper with a remark. The notion of asymptotic distribution given in Definition 1.2 is deeply connected with the notion of vague convergence of probability measures, which is also referred to as convergence in distribution in ref. [26]. More precisely, as shown in ref. [32]:

  1. if { Λ n } n is as in Definition 1.2, then we can associate with each Λ n the atomic probability measure on C defined as

    μ Λ n = 1 d n i = 1 d n δ λ i , n ,

    where δ z is the Dirac probability measure such that δ z (E) = 1 if zE and δ z (E) = 0 otherwise;

  2. if f is as in Definition 1.2 with k = 1, then we can associate with f a uniquely determined probability measure μ f on C such that

    1 μ k ( D ) D F ( f ( x ) ) d x = C F ( z ) d μ f ( z ) F C c ( C ) .

The asymptotic distribution relation

lim n 1 d n i = 1 d n F ( λ i , n ) = 1 μ d ( Ω ) Ω F ( f ( x ) ) d x F C c ( C )

can therefore be rewritten as

lim n C F ( z ) d μ Λ n ( z ) = C F ( z ) d μ f ( z ) F C c ( C ) ,

which is equivalent to saying that μ Λ n converges vaguely to μ f [36], Def. 13.12]. This equivalence allows for a reinterpretation of the main results of this paper in a probabilistic perspective. In this regard, it is worth pointing out that any multiset of complex numbers Λ n = { λ 1 , n , , λ d n , n } coincides with the spectrum of a matrix A n (take A n = diag ( λ 1 , n , , λ d n , n ) ) and any probability measure μ on C coincides with μ f for some f [32], Cor. 1].
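For a concrete (and purely illustrative) numerical instance of this reformulation, consider the diagonal matrix-sequence D_n = diag(f(i/n))_{i=1,…,n} with f(x) = x² on Ω = [0, 1] (our choice); the integral of a fixed continuous test function F against μ_{Λ_n} then converges to the integral of F against μ_f:

import numpy as np
from scipy.integrate import quad

f = lambda x: x ** 2                          # our choice of f on Omega = [0, 1]
F = lambda z: np.exp(-z ** 2)                 # a continuous test function (compact support is immaterial here, since all lambda_{i,n} lie in [0, 1])

target, _ = quad(lambda x: F(f(x)), 0.0, 1.0) # integral of F against mu_f
for n in [10, 100, 1000, 10000]:
    lam = f(np.arange(1, n + 1) / n)          # Lambda_n = spectrum of D_n = diag(f(i/n))
    print(n, abs(np.mean(F(lam)) - target))   # integral of F against mu_{Lambda_n} minus target -> 0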


Corresponding author: Stefano Serra-Capizzano, Department of Science and High Technology, University of Insubria, Como, Italy; and Division of Scientific Computing, Department of Information Technology, Uppsala University, Uppsala, Sweden, E-mail: 

Acknowledgments

Giovanni Barbarino, Carlo Garoni and Stefano Serra-Capizzano are members of the Research Group GNCS (Gruppo Nazionale per il Calcolo Scientifico) of INdAM (Istituto Nazionale di Alta Matematica).

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission. Giovanni Barbarino provided the proofs of the main results of the paper. Carlo Garoni revised/fine-tuned/extended these proofs and wrote the paper. Sven-Erik Ekström, Stefano Serra-Capizzano and Paris Vassalos had the idea of this paper and wrote the original version of the manuscript. Sven-Erik Ekström and David Meadon performed the numerical experiments in Section 4 and Appendix A.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interest: The authors state no conflict of interest.

  6. Research funding: Giovanni Barbarino was supported by the Alfred Kordelinin Säätiö Grant 210122 and the European Research Council (ERC) Consolidator Grant 101085607 through the Project eLinoR. Carlo Garoni was supported by an INdAM-GNCS Project (CUP E53C22001930001) and by the Department of Mathematics of the University of Rome Tor Vergata through the MUR Excellence Department Project MatMod@TOV (CUP E83C23000330006) and the Project RICH_GLT (CUP E83C22001650005). David Meadon was funded by the Centre for Interdisciplinary Mathematics (CIM) at Uppsala University. Stefano Serra-Capizzano was funded by the PRIN-PNRR (Piano Nazionale di Ripresa e Resilienza) Project MATHPROCULT (MATHematical tools for predictive maintenance and PROtection of CULTural heritage, Code P20228HZWR, CUP J53D23003780006) and by the European High-Performance Computing Joint Undertaking (JU) under Grant Agreement 955701. The JU receives support from the European Union's Horizon 2020 Research and Innovation Programme and Belgium, France, Germany, Switzerland. Stefano Serra-Capizzano is also grateful to the Theory, Economics and Systems Laboratory (TESLAB) of the Department of Computer Science at the Athens University of Economics and Business for providing financial support.

  7. Data availability: Not applicable.

Appendix A: Formulas for the eigenvalues of B-spline Galerkin discretization matrices

Consider the following second-order differential eigenvalue problem:

−u_j″(x) = λ_j u_j(x),   x ∈ (0, 1),   u_j(0) = 0,   u_j(1) = 0.

Let p ⩾ 1 and 0 ⩽ kp − 1. In the classical Galerkin method with basis functions given by the n(pk) + k − 1 B-splines B 2,p,k , …, B n(pk)+k,p,k of degree p and smoothness C k ([0, 1]) defined over the uniform knot sequence

0, …, 0 (p + 1 times), 1/n, …, 1/n (p − k times), 2/n, …, 2/n (p − k times), …, (n − 1)/n, …, (n − 1)/n (p − k times), 1, …, 1 (p + 1 times)

and vanishing at the boundary points x = 0 and x = 1, the computation of the numerical solution reduces to solving a linear system whose coefficient matrix is the (n(pk) + k − 1) × (n(pk) + k − 1) matrix given by

L_{n,p,k} = (M_{n,p,k})^{−1} K_{n,p,k},

where K n,p,k , M n,p,k are the symmetric positive definite matrices given by

K_{n,p,k} = [∫_0^1 B′_{j+1,p,k}(x) B′_{i+1,p,k}(x) dx]_{i,j=1}^{n(p−k)+k−1},   M_{n,p,k} = [∫_0^1 B_{j+1,p,k}(x) B_{i+1,p,k}(x) dx]_{i,j=1}^{n(p−k)+k−1};

see [22], Sect. 2.5] for more details. We remark that, for p = 2 and k = 0, the matrix n^{−1}K_{n,2,0} coincides with the matrix A_n of Example 4.6. As proved in ref. [5], Th. 6.17], we have

{n^{−1} K_{n,p,k}}_n ∼_λ f_{p,k},   {n M_{n,p,k}}_n ∼_λ h_{p,k},   {n^{−2} L_{n,p,k}}_n ∼_λ e_{p,k} = (h_{p,k})^{−1} f_{p,k},

where:

  1. the functions f p , k , h p , k : [ 0 , π ] C ( p k ) × ( p k ) are given by

f_{p,k}(θ) = ∑_{ℓ∈ℤ} K_{p,k}[ℓ] e^{iℓθ} = K_{p,k}[0] + ∑_{ℓ>0} (K_{p,k}[ℓ] e^{iℓθ} + (K_{p,k}[ℓ])^T e^{−iℓθ}),   h_{p,k}(θ) = ∑_{ℓ∈ℤ} M_{p,k}[ℓ] e^{iℓθ} = M_{p,k}[0] + ∑_{ℓ>0} (M_{p,k}[ℓ] e^{iℓθ} + (M_{p,k}[ℓ])^T e^{−iℓθ});

  2. the blocks K p , k [ ] , M p , k [ ] are given by

K_{p,k}[ℓ] = [∫_ℝ β′_{j,p,k}(t) β′_{i,p,k}(t − ℓ) dt]_{i,j=1}^{p−k},  ℓ ∈ ℤ,   M_{p,k}[ℓ] = [∫_ℝ β_{j,p,k}(t) β_{i,p,k}(t − ℓ) dt]_{i,j=1}^{p−k},  ℓ ∈ ℤ;

  3. the functions β 1 , p , k , , β p k , p , k : R R are the first pk B-splines defined on the knot sequence

0, …, 0 (p − k times), 1, …, 1 (p − k times), …, η, …, η (p − k times),   η = ⌈(p + 1)/(p − k)⌉.

We remark that, for every θ ∈ [0, π], the matrix f p,k (θ) is Hermitian positive semidefinite and the matrix h p,k (θ) is Hermitian positive definite; see [22], Rem. 2.1]. The following Maple worksheet computes f p,k (θ) and h p,k (θ) for the input pair p, k defined at the beginning. Here, we have chosen p = 2 and k = 0 for comparison with Example 4.6.

> p ≔ 2: k ≔ 0:
> η ≔ ceil((p + 1)/(p − k)):
> with(CurveFitting): with(LinearAlgebra):
> ReferenceKnotSequence ≔ [seq(seq(j, i = 1..p − k), j = 0..η)]:
> # Construction of the reference B-splines β 1,p,k , …, β p−k,p,k
> β ≔ [ ]:
     for i from 1 to p − k do
       β ≔ [op(β), BSpline(p + 1, t, knots = ReferenceKnotSequence[i..i + p + 1])]:
     od:
> # Derivatives of the reference B-splines β 1,p,k , …, β p−k,p,k
> D_β ≔ simplify(diff(β, t)):
> # Construction of the nonzero K-blocks K p,k [ℓ] and the nonzero M-blocks M p,k [ℓ]
> Kblocks ≔ [ ]: Mblocks ≔ [ ]:
     for l from 0 to η − 1 do
       K ≔ Matrix(p − k): M ≔ Matrix(p − k):
       for r from 1 to p − k do for s from 1 to p − k do
         K(r, s) ≔ int(D_β[s]*eval(D_β[r], t = t − l), t = 0..η):
         M(r, s) ≔ int(β[s]*eval(β[r], t = t − l), t = 0..η):
       od: od:
       Kblocks ≔ [op(Kblocks), K]: Mblocks ≔ [op(Mblocks), M]:
     od:
> # Construction of the functions f = f p,k and h = h p,k
> f(θ) ≔ Kblocks[1]: h(θ) ≔ Mblocks[1]:
     for j from 2 to η do
       f(θ) ≔ simplify(f(θ) + Kblocks[j]*exp(I*(j − 1)*θ) + Transpose(Kblocks[j])*exp(−I*(j − 1)*θ)):
       h(θ) ≔ simplify(h(θ) + Mblocks[j]*exp(I*(j − 1)*θ) + Transpose(Mblocks[j])*exp(−I*(j − 1)*θ)):
     od: f(θ), h(θ)
[ 4/3                   −2/3 − (2/3) e^{Iθ}
  −2/3 − (2/3) e^{−Iθ}   8/3 − (4/3) cos(θ) ],   [ 2/15                1/10 + e^{Iθ}/10
                                                   1/10 + e^{−Iθ}/10   2/5 + cos(θ)/15 ]

In Theorems A.1A.3, we report the formulas for the eigenvalues of the matrices n −1 K n,p,k , nM n,p,k , n −2 L n,p,k that we obtained through high-precision numerical computations performed in Julia, using an accuracy of at least 100 decimal digits. In what follows, for every θ ∈ [0, π] and every j = 1, …, pk, we define λ j (f p,k (θ)) (resp., λ j (h p,k (θ)), λ j (e p,k (θ))) as the jth eigenvalue of f p,k (θ) (resp., h p,k (θ), e p,k (θ)) according to the increasing ordering:

λ 1 ( f p , k ( θ ) ) λ p k ( f p , k ( θ ) ) , θ [ 0 , π ] , λ 1 ( h p , k ( θ ) ) λ p k ( h p , k ( θ ) ) , θ [ 0 , π ] , λ 1 ( e p , k ( θ ) ) λ p k ( e p , k ( θ ) ) , θ [ 0 , π ] .

Moreover, for every n ⩾ 1, we define the uniform grids

Θ_n = {iπ/n : i = 0, …, n},   Θ_n^0 = Θ_n \ {0},   Θ_n^π = Θ_n \ {π},   Θ_n^{0,π} = Θ_n \ {0, π}.

Theorem A.1.

Let 1 ⩽ p ⩽ 100 and 0 ⩽ k ⩽ min(1, p − 1). Then, for every n = 1, …, 100, the eigenvalues of n −1 K n,p,k are given by

λ j ( f p , k ( θ ) ) , θ Θ n , p , k ( j ) , j = 1 , , p k ,

where the grid Θ n , p , k ( j ) belongs to Θ n , Θ n 0 , Θ n π , Θ n 0 , π and is given in Figure A.1.

Figure A.1:

Grid Θ_{n,p,k}^{(j)} for the values of n, p, k considered in Theorem A.1 and for j = 1, …, p − k. For example, for p = 2, k = 0, and every n = 1, …, 100, we have Θ_{n,2,0}^{(1)} = Θ_n^0 and Θ_{n,2,0}^{(2)} = Θ_n^{0,π}, in accordance with Example 4.6.

Conjecture A.1.

Theorem A.1 continues to hold if we replace “for every n = 1, …, 100” with “for every n ⩾ 1”.

To simplify the statement of Theorem A.2, we define the integer sequence

a ( m ) = m + 8 m , m 1 .

The sequence { a m } m = 1,2 , is referred to as the A186348 sequence in the on-line encyclopedia of integer sequences (OEIS); see https://oeis.org/A186348. For every p ⩾ 3, we define

α p = minimum positive integer such that  p m = 1 α p a ( m ) , m = 1 α p + 1 a ( m ) .

It is not difficult to check that { p α p } p = 3,4 , is an increasing sequence such that pα p ⩾ 2 for all p ⩾ 3.

Theorem A.2.

Let 1 ⩽ p ⩽ 100 and 0 ⩽ k ⩽ min(1, p − 1). Then, for every n = 1, …, 100, the eigenvalues of nM n,p,k are given by

λ j ( h p , k ( θ ) ) , θ Θ n , p , k [ j ] , j = 1 , , p k ,

where the grid Θ n , p , k [ j ] belongs to Θ n , Θ n 0 , Θ n π , Θ n 0 , π and is defined as follows:

Θ n , p , 0 [ j ] = Θ n 0 , if p + j is odd , Θ n π , if p + j is even and j p , Θ n 0 , π , if j = p , Θ n , p , 1 [ j ] = Θ n , if p > 2 and j = p α p 1 , Θ n 0 , if p = 2 ; or if p > 2 and either j < p α p 1 and p + j is odd or p α p 1 < j < p 1 and p + j is even , Θ n π , if p > 2 and either j < p α p 1 and p + j is even or p α p 1 < j < p 1 and p + j is odd , Θ n 0 , π , if p > 2 and j = p 1 .

Conjecture A.2.

Theorem A.2 continues to hold if we replace “1 ⩽ p ⩽ 100” with “p ⩾ 1” and “for every n = 1, …, 100” with “for every n ⩾ 1”.

Theorem A.3.

Let 1 ⩽ p ⩽ 100 and 0 ⩽ k ⩽ min(1, p − 1). Then, for every n = 1, …, 20, the eigenvalues of n −2 L n,p,k are given by

λ j ( e p , k ( θ ) ) , θ Θ n , p , k j , j = 1 , , p k ,

where the grid Θ n , p , k j belongs to Θ n , Θ n 0 , Θ n π , Θ n 0 , π and is defined as follows:

Θ n , p , k j = Θ n , if p + j i s odd and j > 1 , Θ n 0 , if p + j is odd and j = 1 , Θ n 0 , π , if p + j is even .

Conjecture A.3.

Theorem A.3 continues to hold if we replace “1 ⩽ p ⩽ 100” with “p ⩾ 1” and “for every n = 1, …, 20” with “for every n ⩾ 1”.

References

[1] C. Garoni and S. Serra-Capizzano, Generalized Locally Toeplitz Sequences: Theory and Applications, vol. I, Cham, Springer, 2017, https://doi.org/10.1007/978-3-319-53679-8.

[2] C. Garoni and S. Serra-Capizzano, Generalized Locally Toeplitz Sequences: Theory and Applications, vol. II, Cham, Springer, 2018, https://doi.org/10.1007/978-3-030-02233-4.

[3] G. Barbarino, “A systematic approach to reduced GLT,” BIT Numer. Math., vol. 62, pp. 681–743, 2022, https://doi.org/10.1007/s10543-021-00896-7.

[4] G. Barbarino, C. Garoni, M. Mazza, and S. Serra-Capizzano, “Rectangular GLT sequences,” Electron. Trans. Numer. Anal., vol. 55, pp. 585–617, 2022, https://doi.org/10.1553/etna_vol55s585.

[5] G. Barbarino, C. Garoni, and S. Serra-Capizzano, “Block generalized locally Toeplitz sequences: theory and applications in the unidimensional case,” Electron. Trans. Numer. Anal., vol. 53, pp. 28–112, 2020, https://doi.org/10.1553/etna_vol53s28.

[6] G. Barbarino, C. Garoni, and S. Serra-Capizzano, “Block generalized locally Toeplitz sequences: theory and applications in the multidimensional case,” Electron. Trans. Numer. Anal., vol. 53, pp. 113–216, 2020, https://doi.org/10.1553/etna_vol53s113.

[7] A. Böttcher, C. Garoni, and S. Serra-Capizzano, “Exploration of Toeplitz-like matrices with unbounded symbols is not a purely academic journey,” Sb. Math., vol. 208, pp. 1602–1627, 2017, https://doi.org/10.1070/sm8823.

[8] U. Grenander and G. Szegő, Toeplitz Forms and Their Applications, 2nd ed. New York, AMS Chelsea Publishing, 1984.

[9] F. Avram, “On bilinear forms in Gaussian random variables and Toeplitz matrices,” Probab. Theory Relat. Fields, vol. 79, pp. 37–45, 1988, https://doi.org/10.1007/bf00319101.

[10] S. V. Parter, “On the distribution of the singular values of Toeplitz matrices,” Linear Algebra Appl., vol. 80, pp. 115–130, 1986, https://doi.org/10.1016/0024-3795(86)90280-6.

[11] E. E. Tyrtyshnikov, “A unifying approach to some old and new theorems on distribution and clustering,” Linear Algebra Appl., vol. 232, pp. 1–43, 1996, https://doi.org/10.1016/0024-3795(94)00025-5.

[12] E. E. Tyrtyshnikov and N. L. Zamarashkin, “Spectra of multilevel Toeplitz matrices: advanced theory via simple matrix relationships,” Linear Algebra Appl., vol. 270, pp. 15–27, 1998, https://doi.org/10.1016/s0024-3795(97)80001-8.

[13] N. L. Zamarashkin and E. E. Tyrtyshnikov, “Distribution of eigenvalues and singular values of Toeplitz matrices under weakened conditions on the generating function,” Sb. Math., vol. 188, pp. 1191–1201, 1997, https://doi.org/10.1070/sm1997v188n08abeh000251.

[14] P. Tilli, “A note on the spectral distribution of Toeplitz matrices,” Linear Multilinear Algebra, vol. 45, pp. 147–159, 1998, https://doi.org/10.1080/03081089808818584.

[15] P. Tilli, “Some results on complex Toeplitz eigenvalues,” J. Math. Anal. Appl., vol. 239, pp. 390–401, 1999, https://doi.org/10.1006/jmaa.1999.6572.

[16] A. Böttcher and B. Silbermann, Introduction to Large Truncated Toeplitz Matrices, New York, Springer, 1999, https://doi.org/10.1007/978-1-4612-1426-7.

[17] S. Serra-Capizzano, “Generalized locally Toeplitz sequences: spectral analysis and applications to discretized partial differential equations,” Linear Algebra Appl., vol. 366, pp. 371–402, 2003, https://doi.org/10.1016/s0024-3795(02)00504-9.

[18] S. Serra-Capizzano, “The GLT class as a generalized Fourier analysis and applications,” Linear Algebra Appl., vol. 419, pp. 180–233, 2006, https://doi.org/10.1016/j.laa.2006.04.012.

[19] P. Tilli, “Locally Toeplitz sequences: spectral properties and applications,” Linear Algebra Appl., vol. 278, pp. 91–120, 1998, https://doi.org/10.1016/s0024-3795(97)10079-9.

[20] D. Bianchi, “Analysis of the spectral symbol associated to discretization schemes of linear self-adjoint differential operators,” Calcolo, vol. 58, 2021, Art. no. 38, https://doi.org/10.1007/s10092-021-00426-5.

[21] D. Bianchi and S. Serra-Capizzano, “Spectral analysis of finite-dimensional approximations of 1d waves in non-uniform grids,” Calcolo, vol. 55, 2018, Art. no. 47, https://doi.org/10.1007/s10092-018-0288-x.

[22] C. Garoni, H. Speleers, S.-E. Ekström, A. Reali, S. Serra-Capizzano, and T. J. R. Hughes, “Symbol-based analysis of finite element and isogeometric B-spline discretizations of eigenvalue problems: exposition and review,” Arch. Comput. Methods Eng., vol. 26, pp. 1639–1690, 2019, https://doi.org/10.1007/s11831-018-9295-y.

[23] B. Beckermann and A. B. J. Kuijlaars, “Superlinear convergence of conjugate gradients,” SIAM J. Numer. Anal., vol. 39, pp. 300–329, 2001, https://doi.org/10.1137/s0036142999363188.

[24] A. B. J. Kuijlaars, “Convergence analysis of Krylov subspace iterations with methods from potential theory,” SIAM Rev., vol. 48, pp. 3–40, 2006, https://doi.org/10.1137/s0036144504445376.

[25] J. M. Bogoya, A. Böttcher, S. M. Grudsky, and E. A. Maximenko, “Maximum norm versions of the Szegő and Avram–Parter theorems for Toeplitz matrices,” J. Approx. Theory, vol. 196, pp. 79–100, 2015, https://doi.org/10.1016/j.jat.2015.03.003.

[26] J. M. Bogoya, A. Böttcher, and E. A. Maximenko, “From convergence in distribution to uniform convergence,” Bol. Soc. Mat. Mex., vol. 22, pp. 695–710, 2016, https://doi.org/10.1007/s40590-016-0105-y.

[27] B. Fristedt and L. Gray, A Modern Approach to Probability Theory, Boston, Birkhäuser, 1997, https://doi.org/10.1007/978-1-4899-2837-5.

[28] W. Rudin, Principles of Mathematical Analysis, 3rd ed. New York, McGraw-Hill, 1976.

[29] G. Barbarino, D. Bianchi, and C. Garoni, “Constructive approach to the monotone rearrangement of functions,” Expo. Math., vol. 40, pp. 155–175, 2022, https://doi.org/10.1016/j.exmath.2021.10.004.

[30] G. Pólya and G. Szegő, Problems and Theorems in Analysis I. Series. Integral Calculus. Theory of Functions, Berlin–Heidelberg, Springer, 1998, https://doi.org/10.1007/978-3-642-61905-2.

[31] R. Bhatia, Matrix Analysis, New York, Springer, 1997, https://doi.org/10.1007/978-1-4612-0653-8.

[32] G. Barbarino, “Spectral measures,” in Structured Matrices in Numerical Linear Algebra: Analysis, Algorithms and Applications, Springer INdAM Series, vol. 30, Cham, Springer, 2019, pp. 1–24, https://doi.org/10.1007/978-3-030-04088-8_1.

[33] A. Böttcher and S. M. Grudsky, Spectral Properties of Banded Toeplitz Matrices, Philadelphia, SIAM, 2005, https://doi.org/10.1137/1.9780898717853.

[34] D. Noutsos, S. Serra-Capizzano, and P. Vassalos, “The conditioning of FD matrix sequences coming from semi-elliptic differential equations,” Linear Algebra Appl., vol. 428, pp. 600–624, 2008, https://doi.org/10.1016/j.laa.2007.08.008.

[35] S.-E. Ekström, I. Furci, C. Garoni, C. Manni, S. Serra-Capizzano, and H. Speleers, “Are the eigenvalues of the B-spline isogeometric analysis approximation of −Δu = λu known in almost closed form?” Numer. Linear Algebra Appl., vol. 25, 2018, Art. no. e2198, https://doi.org/10.1002/nla.2198.

[36] A. Klenke, Probability Theory: A Comprehensive Course, London, Springer, 2008.

Received: 2023-07-31
Accepted: 2025-01-24
Published Online: 2025-07-16

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
