Home Mathematics A standard form in (some) free fields: How to construct minimal linear representations
Article Open Access

A standard form in (some) free fields: How to construct minimal linear representations

  • Konrad Schrempf EMAIL logo
Published/Copyright: November 23, 2020

Abstract

We describe a standard form for the elements in the universal field of fractions of free associative algebras (over a commutative field). It is a special version of the normal form provided by Cohn and Reutenauer and enables the use of linear algebra techniques for the construction of minimal linear representations (in standard form) for the sum and the product of two elements (given in a standard form). This completes “minimal” arithmetic in free fields since “minimal” constructions for the inverse are already known. The applications are wide: linear algebra (over the free field), rational identities, computing the left gcd of two non-commutative polynomials, etc.

MSC 2010: 16K40; 16Z05; 16G99; 16S85; 15A22

Introduction

The embedding of the integers into the field of rational numbers is easy, whereas that of non-commutative rings (into skew fields) is not that straightforward even in special cases [1]. After Ore’s construction, it took almost 40 years for many contributors to develop a more general theory [2, Chapter 7]. The embedding of the free associative algebra (over a commutative field) into a ring of quotients (non-commutative localization) is even classified as “ugly” [3].

On the other hand, there are many parallels between the ring of integers and the free associative algebra, for example, both have a distributive factor lattice [4] or [5, Section 3.5]. And, starting from the normal form (minimal linear representation) of Cohn and Reutenauer [6] of an element in the universal field of fractions (of the free associative algebra), we formulate a standard form which can be seen as a “generalized” fraction. As a reminiscence to the work with “classical” fractions (we learn at school), we call them briefly “free fractions” [7].

For an introduction to free fields, we recommend [8, Section 9.3] and with respect to linear representations in particular [9]. For details we refer to [2, Chapter 7] or [10, Section 6.4]. Since this work is only one part in a series, further information and references can be found in [11] (linear word problem, minimal inverse), [12] (polynomial factorization) and [13] (general factorization theory).

The main idea of the standard form is simple: instead of viewing the system matrix of a linear representation as a single “block” we transform it into a block upper triangular form with smaller diagonal blocks. If these pivot blocks are small enough (called “refined”), linear techniques can be used to eliminate all superfluous block rows or columns, that is, solving “local” word problems, in the “sum” or the “product” of two linear representations, eventually yielding a minimal linear representation. This can be accomplished with complexity O ( d n 5 ) for an alphabet with d letters and a linear representation of dimension n with pivot blocks of size n . For the refinement of pivot blocks, we need to solve – at least in general – systems of polynomial equations. However, linear techniques can be used in some cases to “break” big pivot blocks into smaller ones (see Remark 5.8).

Since almost all here is rather elementary, it should be noted that it needs some efforts to dig deep enough into the theory in the background (in particular that of Cohn) to really understand what is going on. Despite the main results (mentioned in the following), there is only one non-trivial observation (formulated in Theorem 4.16): a given refined linear representation is minimal if none of the linear systems of equations for block row or column minimization has a solution. (In the general non-refined case nothing can be said about minimality if there is no more “linear” minimization step possible.)

In other words, while the normal form [6] “linearizes” the word problem in free fields [11, Section 2], the standard form “linearizes” (the minimization of) the sum and the product (of two elements).

Section 1 provides the basic setup, and Section 2 summarizes several (different) constructions of linear representations for the rational operations (sum, product and inverse). In a first reading, only the first two propositions are important. In Section 3, we develop the notation to be able to formulate a standard form in Definition 3.8. The main result is then Theorem 4.16 (or Algorithm 4.14) for the minimization in Section 4. And finally, in Section 5, some applications are mentioned. Example 5.1 can also serve as an introduction for the work (by hand) with linear representations.

The intention of this paper is to be independent of the other three papers in this series (about the free field) as far as possible and leave it to the reader, for example, to interpret a standard form of the inverse of a polynomial as its factorization (into irreducible elements). Although the idea for minimizing “polynomial” linear representations is similar to that in Section 4, [12, Algorithm 32] is only a very special case of Algorithm 4.14 and the “refinement” in the former case is trivial.

Beside the algebraic approach presented here, there are analytical methods for solving the word problem (or testing rational identities like in Example 5.1) in polynomial time by plugging in “sufficiently large” matrices [14,15]. Closely related to linear representations are realizations which can be “cut down” by plugging in operators [16] or “reduced” by plugging in matrices [17]. Yet another point of view is from invariant subspaces [18,19].

Once the rich structure of Cohn and Reutenauer’s normal form becomes “visible”, a lot can be done, transforming the rather abstract free field into an applied. One can use non-commutative “rational functions” just like rational numbers, respectively, “free fractions” just like “classical” fractions.

1 Preliminaries

We represent elements (in free fields) by admissible linear systems (Definition 1.9), which are just a special form of linear representations (Definition 1.4). Rational operations (scalar multiplication, addition, multiplication and inverse) can be easily formulated in terms of linear representations (Proposition 2.2).

Notation. The set of the natural numbers is denoted by = { 1 , 2 , } . Zero entries in matrices are usually replaced by (lower) dots to emphasize the structure of the non-zero entries unless they result from transformations where there were possibly non-zero entries before. We denote by I n the identity matrix and Σ n the permutation matrix that reverses the order of rows/columns (of size n), respectively, I and Σ if the size is clear from the context.

Let K be a commutative field, K ¯ its algebraic closure and X = { x 1 , x 2 , , x d } be a finite (non-empty) alphabet. K X denotes the free associative algebra (or free K -algebra) and F = K ( X ) its universal field of fractions (or “free field”) [9,10]. An element in K X is called (non-commutative or nc) polynomial. In our examples, the alphabet is usually X = { x , y , z } . Including the algebra of nc rational series we have the following chain of inclusions:

K K X K rat X K ( X ) F .

The free monoid X generated by X is the set of all finite words x i 1 x i 2 x i n with i k { 1 , 2 , , d } . An element of the alphabet is called letter, one of the free monoid words. The multiplication on X is the concatenation of words, that is, ( x i 1 x i m ) ( x j 1 x j n ) = x i 1 x i m x j 1 x j n , with neutral element 1, the empty word. For detailed introductions see [20, Chapter 1] or [21, Section I.1].

Definition 1.1

(Inner Rank, Full Matrix, Hollow Matrix [5,9]) Given a matrix A K X n × n , the inner rank of A is the smallest number m such that there exists a factorization A = T U with T K X n × m and U K X m × n . The matrix A is called full if m = n , non-full otherwise. It is called hollow if it contains a zero submatrix of size k × l with k + l > n .

Definition 1.2

(Associated and Stably Associated Matrices [10]) Two matrices A and B over K X (of the same size) are called associated over a subring R K X if there exist invertible matrices P , Q over R such that A = P B Q . A and B (not necessarily of the same size) are called stably associated if A I p and B I q are associated for some unit matrices I p and I q . Here by C D we denote the diagonal sum C . . D .

Lemma 1.3

[10, Corollary 6.3.6] A linear square matrix over K X which is not full is associated over K with a linear hollow matrix.

Definition 1.4

(Linear Representations, Dimension, Rank [6,9]) Let f F . A linear representation of f is a triple π f = ( u , A , v ) with u K 1 × n , full A = A 0 1 + A 1 x 1 + + A d x d , that is, A is invertible over F , A K n × n , v K n × 1 and f = u A 1 v . The dimension of π f is dim ( u , A , v ) = n . It is called minimal if A has the smallest possible dimension among all linear representations of f. The “empty” representation π = ( , , ) is the minimal one of 0 F with dim π = 0 . Let f F and π be a minimal linear representation of f. Then the rank of f is defined as rank f = dim π .

Remark

Cohn and Reutenauer defined linear representations slightly more generally, namely, f = c + u A 1 v with possibly non-zero c K and call it pure when c = 0 . Two linear representations are called equivalent if they represent the same element [9]. Two (pure) linear representations ( u , A , v ) and ( u ˜ , A ˜ , v ˜ ) of dimension n are called isomorphic if there exist invertible matrices P , Q K n × n such that u = u ˜ Q , A = P A ˜ Q and v = P v ˜ [9].

Theorem 1.5

[9, Theorem 1.4] If π = ( u , A , v ) and π = ( u , A , v ) are equivalent (pure) linear representations, of which the first is minimal, then the second is isomorphic to a representation π = ( u , A , v ) which has the block decomposition

u = . u , A = . A . . a n d v = v 1 . .

Remark

In principle, given π of dimension n, one can look for invertible matrices P , Q such that P π Q = ( u Q , P A Q , P v ) has the form of π to minimize π . However, for an alphabet with d letters and a lower left block of zeros of size k × ( n k ) we would get ( d + 1 ) k ( n k ) + k polynomial equations with at most quadratic terms and two equations of degree n (to ensure invertibility of the transformation matrices) in 2 n 2 commuting unknowns. This is already rather challenging for n = 5 . The goal is therefore to use linear techniques as far as possible.

Definition 1.6

(Left and Right Families [6]) Let π = ( u , A , v ) be a linear representation of f F of dimension n. The families ( s 1 , s 2 , , s n ) F with s i = ( A 1 v ) i and ( t 1 , t 2 , , t n ) F with t j = ( u A 1 ) j are called left family and right family, respectively. L ( π ) = span { s 1 , s 2 , , s n } and R ( π ) = span { t 1 , t 2 , , t n } denote their linear spans (over K ).

Proposition 1.7

[6, Proposition 4.7] A representation π = ( u , A , v ) of an element f F is minimal if and only if both the left family and the right family are K -linearly independent. In this case, L ( π ) and R ( π ) depend only on f.

Definition 1.8

(Element Types [13]) An element f F is called of type ( 1 , ) (respectively, ( 0 , ) ) if 1 R ( f ) , that is, 1 R ( π ) for some minimal linear representation π of f (respectively, 1 R ( f ) ). It is called of type ( , 1 ) (respectively, ( , 0 ) ) if 1 L ( f ) (respectively, 1 L ( f ) ). Both subtypes can be combined.

Remark

The following definition is a special case of Cohn’s more general admissible systems [2, Section 7] and the slightly more general linear representations [6].

Definition 1.9

(Admissible Linear Systems [11]) A linear representation A = ( u , A , v ) of f F is called admissible linear system (ALS) for f if u = e 1 = [ 1 , 0 , , 0 ] . The element f is then the first component of the (unique) solution vector s = A 1 v . An ALS is also written as A s = v , or if v = [ 0 , , 0 , λ ] as ( 1 , A , λ ) . Given a linear representation A = ( u , A , v ) of dimension n of f F and invertible matrices P , Q K n × n , the transformed P A Q = ( u Q , P A Q , P v ) is again a linear representation (of f). If A is an ALS, the transformation ( P , Q ) is called admissible if the first row of Q is e 1 = [ 1 , 0 , , 0 ] .

Remark

The left family ( A 1 v ) i (respectively, the right family ( u A 1 ) j ) and the solution vector s of A s = v (respectively, t of u = t A ) are used synonymously.

Transformations can be done by elementary row- and column operations, explained in detail in [11, Remark 1.12]. For further remarks and connections to the related concepts of linearization and realization see [11, Section 1].

For elements in the free associative algebra K X a special form (with an upper unitriangular system matrix) can be used. It plays a crucial role in the factorization of polynomials because it allows us to formulate a minimal polynomial multiplication (Proposition 2.10) and upper unitriangular transformation matrices (invertible by definition) suffice to find all possible factors (up to trivial units). For details we refer to [12, Section 2].

Remark 1.10

While it was intended in [12] to derive a “better” standard form including knowledge about the factorization it turned out later that this is not that easy in the general case because of the necessity to distinguish several cases in the multiplication [13, Section 5]. The term “pre-standard ALS” (for polynomials) is now replaced by polynomial ALS (which is just a special form of a refined ALS, Section 3). And a minimal polynomial ALS is already in a standard form. Particularly to avoid confusion with special transformation matrices for the factorization, everything is put into a uniform context in [22]. For an overview see also [7, Figure 1]. There are only some minor changes in the formulation of results and proofs necessary, for example, to construct the product in [12, Lemma 39] using λ 2 λ A 1 and λ λ 2 A 2 .

Definition 1.11

(Polynomial ALS and Transformation [12]) An ALS A = ( u , A , v ) of dimension n with system matrix A = ( a i j ) for a non-zero polynomial 0 p K X is called polynomial, if

  1. v = [ 0 , , 0 , λ ] T for some λ K and

  2. a i i = 1 for i = 1 , 2 , , n and a i j = 0 for i > j , that is, A is upper triangular.

An admissible transformation ( P , Q ) for an ALS A is called polynomial if it has the form

( P , Q ) = 1 α 1 , 2 α 1 , n 1 α 1 , n 1 α n 2 , n 1 α n 2 , n 1 α n 1 , n 1 , 1 0 0 0 1 β 2 , 3 β 2 , n 1 β n 1 , n 1 .

If additionally α 1 , n = α 2 , n = = α n 1 , n = 0 , then ( P , Q ) is called polynomial factorization transformation.

Definition 1.12

Let M = M 1 x 1 + + M d x d with M i K n × n for some n . An element in F is called regular if it has a linear representation ( u , A , v ) with A = I M , that is, A 0 = I in Definition 1.4, or equivalently, if A 0 is regular (invertible).

2 Rational operations

Usually, we want to construct minimal admissible linear systems (out of minimal ones), that is, perform “minimal” rational operations. Minimal scalar multiplication is trivial. In some special cases, minimal multiplication or even minimal addition (if two elements are disjoint [9]) can be formulated (Proposition 2.10 or [13, Theorem 5.2] and [13, Proposition 3.5]). For the minimal inverse, we have to distinguish four cases, which are summarized in Theorem 2.11. In general, we have to minimize admissible linear systems. This will be the main goal of the following sections.

Proposition 2.1

(Minimal Monomial [11, Proposition 4.1]) Let k and f = x i 1 x i 2 x i k be a monomial in K X K ( X ) . Then

A = 1 . . , 1 x i 1 1 x i 2 1 x i k 1 , 1 . 1 . 1 . 1

is a minimal polynomial ALS of dimension dim ( A ) = k + 1 .

Proposition 2.2

(Rational Operations [9]) Let 0 f , g F be given by the admissible linear systems A f = ( u f , A f , v f ) and A g = ( u g , A g , v g ) , respectively, and let 0 μ K . Then admissible linear systems for the rational operations can be obtained as follows:

The scalar multiplication μ f is given by

μ A f = ( u f , A f , μ v f ) .

The sum f + g is given by

A f + A g = [ u f . ] , A f A f u f u g . A g , v f v g .

The product fg is given by

A f A g = [ u f . ] , A f v f u g . A g , 1 . v g .

And the inverse f 1 is given by

A f 1 = [ 1 . ] , v f A f . u f , 1 . 1 .

Since we need alternative constructions (to that in Proposition 2.2) for the product, we state them here in Propositions 2.6 and 2.7. Before, we need some technical results from [11] and [12]. However, these are rearranged such that similarities become more obvious and the flexibility in applications is increased. In particular, one can prove Lemma 2.4 by applying Lemma 2.3, see [13, Lemma 3.8].

Lemma 2.3

[12, Lemma 25]. Let A = ( u , A , v ) be an ALS of dimension n 1 with K - linearly independent left family s = A 1 v and B = B 0 1 + B 1 x 1 + + B d x d with B K m × n , such that B s = 0 . Then there exists a (unique) T K m × n such that B = T A .

Lemma 2.4

(For Type (∗, 1) [11, Lemma 4.11]) Let A = ( u , A , v ) be a minimal ALS with dim A = n 2 and 1 L ( A ) . Then there exists an admissible transformation ( P , Q ) such that the last row of PAQ is [ 0 , , 0 , 1 ] and P v = [ 0 , , 0 , λ ] for some λ K .

Lemma 2.5

(For Type (1, ∗) [11, Lemma 4.12]) Let A = ( u , A , v ) be a minimal ALS with dim A = n 2 and 1 R ( A ) . Then there exists an admissible transformation ( P , Q ) such that the first column of PAQ is [ 1 , 0 , , 0 ] and P v = [ 0 , , 0 , λ ] for some λ K .

Remark

If g is of type ( , 1 ) , then, by Lemma 2.4, each minimal ALS for g can be transformed into one with the last row of the form [ 0 , , 0 , 1 ] . If g is of type ( 1 , ) , then, by Lemma 2.5, each minimal ALS for g can be transformed into one with the first column of the form [ 1 , 0 , , 0 ] . This can be done by linear techniques, see the remark before [11, Theorem 4.13].

Remark

Since p K X is of type ( 1 , 1 ) , both constructions can be used for the minimal polynomial multiplication (Proposition 2.10). The alternative proof in [13] relies in particular on Lemma 2.9. One could call the multiplication from Proposition 2.2 type ( , ) . For a discussion of minimality of different types of multiplication we refer to [13].

Proposition 2.6

(Multiplication Type (1, ∗) [13, Proposition 3.11]) Let f , g F \ K be given by the admissible linear systems A f = ( u f , A f , v f ) = ( 1 , A f , λ f ) of dimension n f of the form

A f = 1 . . , a b b a B b . . 1 , 1 . 1 . λ f

and A g = ( u g , A g , v g ) = ( 1 , A g , λ g ) of dimension n g , respectively. Then an ALS for fg of dimension n = n f + n g 1 is given by

A = 1 . . , a b λ f b u g a B λ f b u g . . A g , 1 . 1 . v g .

Proposition 2.7

(Multiplication Type (∗, 1) [13, Proposition 3.12]) Let f , g F \ K be given by the admissible linear systems A f = ( u f , A f , v f ) = ( 1 , A f , λ f ) of dimension n f and A g = ( u g , A g , v g ) = ( 1 , A g , λ g ) of dimension n g of the form

A g = 1 . . , 1 b b . B b . c c , [ 1 . 1 . λ g ] ,

respectively. Then an ALS for fg of dimension n = n f + n g 1 is given by

A = u f . . , A f e n f λ f b e n f λ f b . B b . c c , 1 . . λ g .

Remark

Note that the transformation in the following lemma is not necessarily admissible. However, except for n = 2 (which can be treated by permuting the last two elements in the left family), it can be chosen such that it is admissible. The proof is similar to that of [12, Lemma 2.4].

Lemma 2.8

[13, Lemma 3.15] Let A = ( u , A , v ) be an ALS of dimension n 2 with v = [ 0 , , 0 , λ ] and K -linearly dependent left family s = A 1 v . Let m { 2 , 3 , , n } be the minimal index such that the left subfamily s ̲ = ( A 1 v ) i = m n is K -linearly independent. Let A = ( a i j ) and assume that a i i = 1 for 1 i m and a i j = 0 for j < i m (upper triangular m × m block) and a i j = 0 for j m < i (lower left zero block of size ( n m ) × m ). Then there exists matrices T , U K 1 × ( n + 1 m ) such that

U + ( a m 1 , j ) j = m n T ( a i j ) i , j = m n = 0 0 a n d T ( v i ) i = m n = 0 .

Lemma 2.9

[13, Lemma 3.16]. Let p K X \ K and g F \ K be given by the minimal admissible linear systems A p = ( u p , A p , v p ) and A g = ( u g , A g , v g ) of dimensions n p and n g , respectively, with 1 R ( g ) . Then the left family of the admissible linear systems A = ( u , A , v ) for pg of dimension n = n p + n g 1 from Proposition 2.7 is K -linearly independent.

Proposition 2.10

(Minimal Polynomial Multiplication [12, Proposition 26]) Let 0 p , q K X be given by the minimal polynomial admissible linear systems A p = ( 1 , A p , λ p ) and A q = ( 1 , A q , λ q ) of dimension n p , n q 2 , respectively. Then the ALS A from Proposition 2.6 for pq is minimal of dimension n = n p + n q 1 .

Theorem 2.11

(Minimal Inverse [11, Theorem 4.13]) Let f F \ K be given by the minimal admissible linear system A = ( u , A , v ) of dimension n. Then a minimal ALS for f 1 is given in the following way:

f of type ( 1 , 1 ) yields f 1 of type ( 0 , 0 ) with dim ( A ) = n 1 :

A = 1 , λ Σ b Σ B Σ λ b b Σ , 1 f o r A = 1 , 1 b b . B b . . 1 , λ .

f of type ( 1 , 0 ) yields f 1 of type ( 1 , 0 ) with dim ( A ) = n :

A = 1 , 1 1 λ c 1 λ c Σ . Σ b Σ B Σ . b b Σ , 1 f o r A = 1 , 1 b b . B b . c c , λ .

f of type ( 0 , 1 ) yields f 1 of type ( 0 , 1 ) with dim ( A ) = n :

A = 1 , λ Σ b Σ B Σ Σ a λ b b Σ a . . 1 , 1 f o r A = 1 , a b b a B b . . 1 , λ .

f of type ( 0 , 0 ) yields f 1 of type ( 1 , 1 ) with dim ( A ) = n + 1 :

A = 1 , Σ v Σ A Σ . u Σ , 1 .

(Recall that the permutation matrix Σ reverses the order of rows/columns.)

Corollary 2.12

Let p K X with rank p = n 2 . Then rank ( p 1 ) = n 1 .

Corollary 2.13

Let 0 f F . Then f K if and only if rank ( f ) = rank ( f 1 ) = 1 .

Remark

Note that n 2 for type ( 1 , 1 ) , ( 1 , 0 ) and ( 0 , 1 ) . The block B is always square of size n 2 . For n = 2 , the system matrix of A is

1 b . 1 for type ( 1 , 1 ) ,   1 b . c for type ( 1 , 0 )  and   a b . 1 for type ( 0 , 1 ) .

3 A standard form

After providing a “language” to be able to formulate “operations” on the system matrix of an admissible linear system, we can define a standard form (Definition 3.8). A standard ALS will be minimal and (has) refined (pivot blocks). In the following section, we minimize a refined ALS. This is somewhat technical to describe but simple (we need to solve linear systems of equations). At the end of this section, we illustrate how to refine pivot blocks. To the contrary, this is easy to describe but in general very difficult to accomplish (we need to solve polynomial systems of equations). Since the latter is closely related to factorization, we refer to [12] and [13] for further information.

Definition 3.1

(Pivot Blocks, Block Transformation) Let A = ( u , A , v ) be an admissible linear system and denote A = ( A i j ) i , j = 1 m the block decomposition (with square diagonal blocks A i i ) with maximal m such that A i j = 0 for i > j . The diagonal blocks A i i are called pivot blocks, the number m is denoted by # pb ( A ) = # pb ( A ) . The dimension (or size) of a pivot block A i i for i { 1 , 2 , , m } is dim i ( A ) . For i < 1 or i > m let dim i ( A ) = 0 . A (admissible) transformation ( P , Q ) is called (admissible) block transformation (for A ) if P i j = Q i j = 0 for i > j (with respect to the block structure of A ).

Notation

Let A = ( u , A , v ) be an ALS with m pivot blocks of size n i = dim i ( A ) . We denote by n i : j = dim i : j ( A ) = n i + n i + 1 + + n j the sum of the sizes of the pivot blocks A i i to A j j (with the convention n i : j = 0 for j < i ). For a given system, the identity matrix of size n i : j is denoted by I i : j . If ( P , Q ) is an admissible transformation for A , then ( P A Q ) i j denotes the (to the block decomposition of A corresponding) block ( i , j ) of size n i × n j in PAQ, ( P v ) i ̲ that of size n i × 1 in Pv and ( u Q ) j ̲ that of size 1 × n j in uQ.

Notation

Components in the left family s = A 1 v are (as usual) denoted by s i . The jth component for 1 j dim i ( A ) of the ith block for 1 i # pb ( A ) is denoted by s i ̲ ( j ) . A subfamily of s with respect to the pivot block k is denoted by s k ̲ , s i : j ̲ = ( s i ̲ , s i + 1 ̲ , , s j ̲ ) . Analogous is used for the right family t = u A 1 .

Notation

A “grouping” of pivot blocks { i , i + 1 , , j } of the system matrix is denoted by A i : j , i : j . If it is clear from the context where the block ends (respectively, starts), we write A i : , i : (respectively, A : j , : j ), in particular with respect to a given pivot block. For example, A 1 : , 1 : , A k , k and A : m , : m .

Definition 3.2

(Admissible Pivot Block Transformation) Let A = ( u , A , v ) be an ALS with m = # pb ( A ) pivot blocks of size n i = dim i ( A ) . An admissible transformation ( P , Q ) of the form ( I 1 : k 1 T ¯ I k + 1 : m , I 1 : k 1 U ¯ I k + 1 : m ) with T ¯ , U ¯ K n k × n k is called admissible for pivot block k or (admissible) k-th pivot block transformation.

Definition 3.3

(Refined Pivot Block and Refined ALS) Let A = ( u , A , v ) be an ALS with m = # pb ( A ) pivot blocks of size n i = dim i ( A ) . A pivot block A k k (for 1 k m ) is called refined if there does not exist an admissible pivot block transformation ( P , Q ) k such that ( P A Q ) k k has a lower left block of zeros of size i × ( n k i ) for an i { 1 , 2 , , n k 1 } . The admissible linear system A is called refined if all pivot blocks are refined.

Remarks

That the “form” of a refined ALS is not unique, it will be illustrated in the following example. Moreover, a refined ALS is not necessarily refined over K ¯ : Let

A = 1 . , 1 x 2 x 1 , 1 . 1 .

Adding 2 -times row 1 to row 2 and subtracting 2 -times column 2 of column 1 yields

1 2 x x 0 1 + 2 x s = 1 . 1 .

See also [12, Example 37].

Example 3.4

Let f = ( y 1 x ) 1 be given by the minimal ALS

A f = 1 . , 1 y x 1 , 1 . 1 .

For f + 3 z , we have the (minimal) ALS

A = 1 . . . , 1 y 1 . x 1 x . . . 1 z . . . 1 , 1 . 1 1 . 3 .

Since 1 R ( A ) and A is constructed by the addition in Proposition 2.2, it can easily be transformed – in a controlled way – into another ALS with refined pivot block structure. First, we add column 3 to column 1,

A = 1 . . . , 0 y 1 . 0 1 x . 1 . 1 z . . . 1 , 1 . 1 1 . 3 ,

and then we exchange rows 1 and 3:

A = 1 . . . , 1 . 1 z . 1 x . . y 1 . . . . 1 , 1 . 1 . 3 .

Definition 3.5

(Block Decomposition of an ALS, Block Row and Column Transformation) Let A = ( u , A , v ) be an ALS of dimension n with m = # pb ( A ) 2 pivot blocks of size n i = dim i ( A ) . For 1 k m , the block decomposition with respect to the pivot block k is the system

A [ k ̲ ] = u 1 : ̲ . . , A 1 : , 1 : A 1 : , k A 1 : , : m . A k , k A k , : m . . A : m , : m , v 1 : ̲ v k ̲ v : 3 ̲

with (square) diagonal blocks A 1 : , 1 : , A k , k and A : m , : m of size n 1 : k 1 , n k respectively n k + 1 : m . ( k ̲ is used here to emphasize that k is a block index.) By A [ k ̲ ] we denote the ALS A [ k ̲ ] without block row/column k of dimension n n k (not necessarily equivalent to A [ k ̲ ] ):

A [ k ̲ ] = u 1 ̲ . , A 1 : , 1 : A 1 : , : m . A : m , : m , v 1 ̲ v 3 ̲ .

An admissible transformation ( P , Q ) k ̲ = ( P ( T ¯ , T ) , Q ( U ¯ , U ) ) k ̲ of the form

(3.6) ( P , Q ) k ̲ = I 1 : k 1 . . . T ¯ T . . I k + 1 : m , I 1 : k 1 . . . U ¯ U . . I k + 1 : m

is called kth block row transformation for A [ k ̲ ] , one of the form ( P ( T ¯ , T ) , Q ( U ¯ , U ) ) k ̲ ,

(3.7) ( P , Q ) k ̲ = I 1 : k 1 T . . T ¯ . . . I k + 1 : m , I 1 : k 1 U . . U ¯ . . . I k + 1 : m

is called kth block column transformation for A [ k ̲ ] . For T ¯ = U ¯ = I n k we write also P ( T ) , respectively, Q ( U ) and call the block transformation particular.

Definition 3.8

(Standard Admissible Linear System) A minimal and refined ALS A = ( u , A , v ) = ( 1 , A , λ ) , that is, v = [ 0 , , 0 , λ ] , is called standard.

Remark

For a polynomial p given by a standard ALS A (of dimension n 2 ), the minimal inverse of A (of dimension n 1 ) is refined if and only if A is obtained by the minimal polynomial multiplication of its irreducible factors q i in p = q 1 q 2 q m . For a detailed discussion about the factorization of polynomials (in free associative algebras) we refer to [12]. One of the simplest non-trivial examples is

p = x ( 1 y x ) = x x y x = ( 1 x y ) x .

With respect to the general factorization theory it is open to show that the free field is a “similarity unique factorization domain” [13].

Example 3.9

(Factorization versus Refinement) Later, in Example 5.1 (Hua’s identity), we need to refine a pivot block. Although the necessary transformations are obvious, the procedure should be illustrated in a systematic way. But first we have a look on how this 2 × 2 block appears, namely, by inverting the element given by the ALS

A = 1 . . , x 0 1 . 1 y . x 1 , 0 . 1 .

If we exchange columns 2 and 3, it is immediate that this is the “product” of the admissible linear systems

A 1 = 1 , x , 1 and A 2 = 1 . , y 1 1 x , 1 . 1 .

Applying the minimal inverse on

A = 1 . . , x 1 0 . y 1 . 1 x , 0 . 1 ,

we get a refined (and minimal) ALS, namely,

A = 1 . . . , 1 x 1 1 . . 1 y . . . 1 x . . . 1 , 1 . 1 . 1 . 1 .

The factorization here is really simple.

Example 3.10

(Pivot Block Refinement) Now we focus on the refinement of the second pivot block in the ALS

A = 1 . . . , 1 1 x . . y 1 . . 1 0 x . . . 1 , 1 . 1 . 1 . 1 .

(This is the ALS (5.3) with the first three rows scaled by 1 .) We are looking for an admissible transformation ( P , Q ) of the form

( P , Q ) = 1 . . . . α 2 , 2 α 2 , 3 . . α 3 , 2 α 3 , 3 . . . . 1 , 1 . . . . β 2 , 2 β 2 , 3 . . β 3 , 2 β 3 , 3 . . . . 1 .

In particular, these matrices P and Q have to be invertible, that is, we need the conditions det ( P ) 0 and det ( Q ) 0 . To create a lower left 1 × 1 block of zeros in ( P A Q ) 2 , 2 , we need to solve the following polynomial system of equations (with commuting unknowns α i j and β i j ):

α 2 , 2 α 3 , 3 α 2 , 3 α 3 , 2 = 1 , β 2 , 2 β 3 , 3 β 2 , 3 β 3 , 2 = 1 , α 3 , 2 β 3 , 2 + α 3 , 3 β 2 , 2 = 0 for 1 and α 3 , 2 β 2 , 2 = 0 for y .

We obtain the last two equations by multiplication of the transformation blocks with the corresponding coefficient matrices of the pivot blocks (irrelevant equations are marked with “ ” on the right hand side)

α 2 , 2 α 2 , 3 α 3 , 2 α 3 , 3 . 1 1 . β 2 , 2 β 2 , 3 β 3 , 2 β 3 , 3 = 0 for 1 and α 2 , 2 α 2 , 3 α 3 , 2 α 3 , 3 1 . 1 . . β 2 , 2 β 2 , 3 β 3 , 2 β 3 , 3 = 0 for y .

To solve this system of polynomial equations, Gröbner-Shirshov bases (Bokut and Kolesnikov wrote the nice survey [23] about these bases going back to the 1960s) can be used. For detailed discussions, we refer to [24] or [25]. The basic idea comes from [9, Theorem 4.1] and is also used in [12, Proposition 42] and [13, Section 5]. For further remarks on the refinement of pivot blocks see also [7, Section 4.4].

4 Minimizing a refined ALS

First of all, we derive the left, respectively, right block minimization equations. For that we consider an admissible linear system A = ( u , A , v ) of dimension n with m = # pb ( A ) 2 pivot blocks of size n i = dim i ( A ) . For 1 k < m , we transform this system using the block row transformation ( P , Q ) = ( P ( T ¯ , T ) , Q ( U ¯ , U ) ) k ̲ , namely,

P A Q = I 1 : k 1 . . . T ¯ T . . I k + 1 : m A 1 : , 1 : A 1 : , k A 1 : , : m . A k , k A k , : m . . A : m , : m I 1 : k 1 . . . U ¯ U . . I k + 1 : m = A 1 : , 1 : A 1 : , k A 1 : , : m . T ¯ A k , k T ¯ A k , : m + T A : m , : m . . A : m , : m I 1 : k 1 . . . U ¯ U . . I k + 1 : m = A 1 : , 1 : A 1 : , k U ¯ A 1 : , k U + A 1 : , : m . T ¯ A k , k U ¯ T ¯ A k , k U + T ¯ A k , : m + T A : m , : m . . A : m , : m and P v = I 1 : k 1 . . . T ¯ T . . I k + 1 : m v 1 : ̲ v k ̲ v : m ̲ = v 1 : ̲ T ¯ v k ̲ + T v : m ̲ v : m ̲ .

Now we can read off a sufficient condition for ( Q 1 s ) k ̲ = 0 n k × 1 , namely, the existence of matrices T , U K n k × n k + 1 : m and invertible matrices T ¯ , U ¯ K n k × n k such that

T ¯ A k , k U + T ¯ A k , : m + T A : m , : m = 0 n k × n k + 1 : m and T ¯ v k ̲ + T v : m ̲ = 0 n k × 1 .

Since T ¯ is invertible (as a diagonal block of an invertible matrix P), this condition is equivalent to the existence of matrices T , U K n k × n k + 1 : m such that

(4.1) A k , k U + A k , : m + T ¯ 1 T T A : m , : m = 0 n k × n k + 1 : m and v k ̲ + T ¯ 1 T T v : m ̲ = 0 n k × 1 .

Applying the block column transformation ( P , Q ) = ( P ( T ¯ , T ) , Q ( U ¯ , U ) ) k ̲ , we obtain

P A Q = I 1 : k 1 T . . T ¯ . . . I k + 1 : m A 1 : , 1 : A 1 : , k A 1 : , : m . A k , k A k , : m . . A : m , : m I 1 : k 1 U . . U ¯ . . . I k + 1 : m = A 1 : , 1 : A 1 : , k + T A k , k A 1 : , : m + T A k , : m . T ¯ A k , k T ¯ A k , : m . . A : m , : m I 1 : k 1 U . . U ¯ . . . I k + 1 : m = A 1 : , 1 : A 1 : , 1 : U + A 1 : , k U ¯ + T A k , k U ¯ A 1 : , : m + T A k , : m . T ¯ A k , k U ¯ T ¯ A k , : m . . A : m , : m

and therefore a sufficient condition for ( t P 1 ) k ̲ = 0 1 × n k , namely, the existence of matrices T , U K n 1 : k 1 × n k such that

(4.2) A 1 : , 1 : U U ¯ 1 U + A 1 : , k + T A k , k = 0 n 1 : k 1 × n k .

Remark

A variant of the linear system of Eq. (4.1) also appears in [11, Lemma 2.3] and [11, Theorem 2.4] (linear word problem).

Remark 4.3

(Extended ALS) In some cases, it is necessary to use an extended ALS to be able to apply all necessary left minimization steps, for example, for f 1 f if f is of type ( 1 , 1 ) . Let A = ( u , A , v ) = ( 1 , A , λ ) be an ALS with m = # pb ( A ) 2 pivot blocks and k = 1 . The “extended” block decomposition is then (the block row A 1 : , 1 : vanishes)

A [ k ̲ ] = 1 . . , 1 A 0 , k . . A k , k A k , : m . . A : m , : m , 1 . 1 . v : m ̲

with A 0 , k = [ 1 , 0 , , 0 ] . The first row in A [ 1 ̲ ] is only changed indirectly (via admissible column operations) and stays scalar. Therefore, it can be removed easily (if necessary). This is illustrated in Example 4.5.

Notation

Given an admissible linear system A , by A ˜ = A [ + 0 ] we denote the (to A equivalent) extended ALS. Conversely, A ˜ [ 0 ] = ( A [ + 0 ] ) [ 0 ] = A . The additional row and column are indexed by 0. If A ˜ is transformed admissibly, A ˜ [ 0 ] is an ALS.

Definition 4.4

(Minimization Equations and Transformations) Let A = ( u , A , v ) be an ALS of dimension n with m = # pb ( A ) 2 pivot blocks of size n i = dim i ( A ) . For k { 1 , 2 , , m 1 } , Eq. (4.1),

A k , k U + A k , : m + T A : m , : m = 0 n k × n k + 1 : m and v k ̲ + T v : m ̲ = 0 n k × 1

with respect to the block decomposition A [ k ̲ ] and the particular block row transformation ( P ( T ) , Q ( U ) ) k ̲ are called left block minimization equations. They are denoted by k ̲ = k ̲ ( A ) . A solution by the block row pair ( T , U ) is denoted by k ̲ ( T , U ) = 0 . For k { 2 , 3 , , m } , Eq. (4.2),

A 1 : , 1 : U + A 1 : , k + T A k , k = 0 n 1 : k 1 × n k

with respect to the block decomposition A [ k ̲ ] and the particular block column transformation ( P ( T ) , Q ( U ) ) k ̲ are called right block minimization equations. They are denoted by k ̲ = k ̲ ( A ) . A solution by the block column pair ( T , U ) is denoted by k ̲ ( T , U ) = 0 .

In the following example, we have a close look on the role of the factorization and how to avoid the use of possibly non-linear techniques. All the steps are explained in detail and correspond (with exception of the solution of the linear systems of equations) to that in the following algorithm.

Example 4.5

For f = x 1 ( 1 x y ) 1 and g = x , we consider h = f g = ( 1 y x ) 1 given by the (non-minimal) ALS (constructed by Proposition 2.7)

A = ( u , A , v ) = 1 . . . , x 1 . . . y 1 . . 1 x x . . . 1 , 1 . 1 . 1 . 1 ,

whose pivot blocks are refined. Here there exists an admissible transformation (with T = 0 , U = 1 and invertible blocks T ¯ , U ¯ K 3 × 3 )

( P , Q ) = 1 0 0 . 0 1 0 . 1 0 1 T . . . 1 , 1 0 0 . 0 1 0 . 1 0 1 U . . . 1 ,

which yields the ALS

P A Q = A = 1 . . . , x 1 . . 1 y 1 1 0 0 x 0 . . . 1 , 1 . 1 . 1 . 1

in which one can eliminate row 3 and column 3 (and – after an appropriate row operation – also the last row and column).

However, minimization can be accomplished much easier: first we observe that the left subfamily s 2 : 3 ̲ (of A ) is K -linearly independent. Also, the right subfamily t 1 : 2 ̲ is K -linearly independent. For the left family with respect to the first pivot block, we consider the extended ALS (see also Remark 4.3)

1 1 . . . . x 1 . . . . y 1 . . . 1 x x . . . . 1 s = 1 . 1 . 1 . 1 . 1

of A , where the upper row and the left column are indexed by zero. Now we add row 3 to row 1, subtract column 1 from column 3 and add column 1 to column 4 (this transformation can be found by solving a linear system of equations):

1 1 . 1 1 . x 0 0 0 . . y 1 . . . 1 x x . . . . 1 s = 1 . 0 1 . 1 . 1 .

Now we can remove row 1 and column 1:

(4.6) 1 . 1 1 . y 1 . . 1 x x . . . 1 s = 1 . 1 . 1 . 1 .

Before we do the last (right) minimization step, we transform the extended ALS back into a “normal” by exchanging columns 1 and 2, scaling the (new) column 1 by 1 and subtract it from column 3:

1 1 . 0 . 1 y 1 . x 1 0 . . . 1 s = 1 . 1 . 1 . 1 .

(If necessary right minimization steps can be performed until one reaches a pivot block with the corresponding non-zero entry in row 0.) Now we can remove row 0 and column 0 again. The last step to a minimal ALS for f g = ( 1 y x ) 1 is trivial:

1 y 0 x 1 0 . . 1 s = 1 1 . 1 .

After removing the last row and column, we exchange the two rows to get a standard ALS.

Now at least one question should have appeared: How can one prove – for a given block index k – the K -linear independence of the left (respectively, right) subfamily s k : m ̲ (respectively, t 1 : k ̲ ) in general, assuming that s k + 1 : m ̲ (respectively, t 1 : k 1 ̲ ) is K -linearly independent? For an answer some preparation is necessary.

Lemma 4.7

[9, Lemma 1.2]. Let f F given by the linear representation π f = ( u , A , v ) of dimension n. Then f = 0 if and only if there exist invertible matrices P , Q K n × n such that

P π f Q = u ˜ 1 ̲ 0 , A ˜ 1 , 1 0 A ˜ 2 , 1 A ˜ 2 , 2 , 0 v ˜ 2 ̲

for square matrices A ˜ 1 , 1 and A ˜ 2 , 2 .

Theorem 4.8

(Left Block Minimization) Let A = ( u , A , v ) = ( 1 , A , λ ) be an ALS of dimension n with m = # pb ( A ) 2 pivot blocks of size n i = dim i ( A ) . Let k { 1 , 2 , , m 1 } such that the left subfamily s k + 1 : m ̲ with respect to the block decomposition A [ k ̲ ] is K -linearly independent while s k : m ̲ is K -linearly dependent. Then there exists a block row transformation ( P , Q ) = ( P ( T ¯ , T ) , Q ( U ¯ , U ) ) k ̲ , such that A ˜ = P A Q has the form

(4.9) A 1 : , 1 : A ˜ 1 : , k A ˜ 1 : , k A ˜ 1 : , : m . A ˜ k , k 0 0 . A ˜ k , k A ˜ k , k A ˜ k , : m . . . A : m , : m s ˜ 1 : ̲ 0 s ˜ k ̲ s : m ̲ = 1 . 1 . 1 . v : m ̲ .

If the pivot block A k , k is refined, then there exists a particular block row transformation ( P , Q ) = ( P ( T ) , Q ( U ) ) k ̲ , such that the left block minimization equations

A k , k U + A k , : m + T A : m , : m = 0 n k × n k + 1 : m a n d v k ̲ + T v : m ̲ = 0 n k × 1

are fulfilled.

Proof

We refer to the block decomposition

A [ k ̲ ] = [ u 1 : ̲ . . ] , A 1 : , 1 : A 1 : , k A 1 : , : m . A k , k A k , : m . . A : m , : m , 1 . 1 . v : m ̲ .

Due to the K -linear independence of the left subfamily s k + 1 : m ̲ , there exists an invertible matrix Q ˜ with blocks U ¯ ° K n k × n k and U K n k × n k + 1 : m , such that ( Q ˜ 1 s ) n 1 : k 1 + 1 = 0 , that is, the first component in s k ̲ can be eliminated. Let

A = A k , k A k , : m . A : m , : m U ¯ U . I k + 1 : m and v = 1 . v : m ̲ .

Then A = ( u , A , v ) is an ALS for 0 F and we can apply Lemma 4.7 to get a transformation

( P , Q ) = T ¯ T T : m , k T : m , : m , U ¯ U U : m , k U : m , : m

such that P A Q has the respective upper right block of zeros and – without loss of generality –  P v = v . Clearly, we can choose U : m , : m = I k + 1 : m . And since s : m ̲ is K -linearly independent, the block of zeros in

P A = T ¯ T T : m , k T : m , : m A k , k A k , : m . A : m , : m = T ¯ A k , k T ¯ A k , : m + T A : m , : m T : m , k A k , k T : m , k A k , : m + T : m , : m A : m , : m

is independent of T : m , k and T : m , : m , thus we can choose T : m , k = 0 and T : m , : m = I k + 1 : m . Now it is obvious that the columns in the lower left block of

P A Q = T ¯ A k , k T ¯ A k , : m + T A : m , : m . A : m , : m U ¯ U U : m , k I = T ¯ A k , k U ¯ + ( T ¯ A k , : m + T A : m , : m ) U : m , k T ¯ A k , k U + T ¯ A k , : m + T A : m , : m A : m , : m U : m , k A : m , : m

are linear combinations of the columns of A : m , : m and therefore we can assume that U : m , k = 0 . Now let U ¯ = U ¯ U ¯ and U = U ¯ U + U . Then P A Q has the desired form (4.9) for the (for k > 1 admissible) block row transformation

( P , Q ) = I 1 : k 1 . . . T ¯ T . . I k + 1 : m , I 1 : k 1 . . . U ¯ U . . I k + 1 : m .

For the second part, we first have to show that each component in s k ̲ can be eliminated by a linear combination of components of s k + 1 : m ̲ , that is, n k = 0 . We assume to the contrary that n k > 0 . But then – by (4.9) –  T ¯ A k , k U ¯ would have an upper right block of zeros of size ( n k n k ) × n k and therefore (after an appropriate permutation) a lower left, contradicting the assumption on a refined pivot block. Hence, there exists a matrix U K n k × n k + 1 : m such that s k ̲ U [ s k + 1 ̲ , , s m ̲ ] = 0 . By assumption v k ̲ = 0 . Now we can apply – as in Lemma 2.8 – Lemma 2.3 with the ALS ( 1 , A : m , : m , λ ) and B = A k , k U A k , : m (and s : m ̲ ). Thus, there exists a matrix T K n k × n k + 1 : m fulfilling A k , k U + A k , : m + T A : m , : m = 0 . Since the last column of T is zero, we also have T v : m ̲ = 0 . With T ¯ = U ¯ = I 1 : n k the transformation ( P , Q ) is the appropriate particular block row transformation.□

Remark

For the proof of the second part of the theorem, one can use alternatively Lemma 4.7, which is more powerful but with respect to the use of linear techniques not that obvious.

Remark

Note that the left subfamily ( s ˜ k ̲ , s : m ̲ ) is not necessarily K -linearly independent. If necessary, one can apply the theorem again after removing block row and column k .

Remark

For k = 1 , if necessary, one must use an extended ALS, see Remark 4.3.

Remark 4.10

Assuming K -linear independence of the left subfamily s k + 1 : m ̲ and a refined pivot block A k , k , the second part of the previous theorem means nothing less than the possibility to check K -linear (in-)dependence of the left subfamily s k : m ̲ by linear techniques!

Theorem 4.11

(Right Block Minimization) Let A = ( u , A , v ) = ( 1 , A , λ ) be an ALS of dimension n with m = # pb ( A ) 2 pivot blocks of size n i = dim i ( A ) . Let k { 2 , 3 , , m } such that the right subfamily t 1 : k 1 ̲ with respect to the block decomposition A [ k ̲ ] is K -linearly independent, while t 1 : k ̲ is K -linearly dependent. Then there exists a block column transformation ( P , Q ) = ( P ( T ¯ , T ) , Q ( U ¯ , U ) ) k ̲ , such that A ˜ = P A Q has the form

(4.12) u 1 ̲ . . . = t 1 ̲ t ˜ 2 ̲ 0 t ˜ 3 ̲ A 1 , 1 A ˜ 1 , 2 0 A ˜ 1 , 3 . A ˜ 2 , 2 0 A ˜ 2 , 3 . A ˜ 2 , 2 A ˜ 2 , 2 A ˜ 2 , 3 . . . A 3 , 3 .

If the pivot block A k , k is refined, then there exists a particular block column transformation ( P , Q ) = ( P ( T ) , Q ( U ) ) k ̲ , such that the right block minimization equations

A 1 : , 1 : U + A 1 : , k + T A k , k = 0 n 1 : k 1 × n k

are fulfilled.

If one uses alternating left and right block minimization steps for the minimization, that is, applying Theorems 4.8 and 4.11, one has to take care that the K -linear independence of the respective other subfamily is guaranteed. This is illustrated in the following example.

Example 4.13

Let A = ( u , A , v ) = ( 1 , A , λ ) be an ALS with m = 5 pivot blocks. For k = 2 , we assume that the left subfamily s k + 1 : m ̲ is K -linearly independent and we assume further that there exists a particular block row transformation ( P , Q ) , such that the left block minimization equations are fulfilled, that is, P A Q has the form

A 1 , 1 A 1 , 2 A ˜ 1 , 3 A ˜ 1 , 4 A ˜ 1 , 5 . A 2 , 2 0 0 0 . . A 3 , 3 A 3 , 4 A 3 , 5 . . . A 4 , 4 A 4 , 5 . . . . A 5 , 5 s ˜ 1 ̲ 0 s 3 ̲ s 4 ̲ s 5 ̲ = 1 . 0 1 . 1 . v 5 ̲ .

If the right subfamily t 1 : 3 ̲ is K -linearly independent, this is not necessarily the case for the right subfamily t 1 : 3 ̲ of the smaller ALS A = ( P A Q ) [ k ̲ ] . That is, one has to apply Theorem 4.11 on A with k = 3 to check that “again.”

Algorithm 4.14

(Minimizing a refined ALS)

Input: A = ( u , A , v ) = ( 1 , A , λ ) refined ALS (for an element f) with m = # p b ( A ) 2 pivot blocks of size n i = dim i ( A ) and K -linearly independent subfamilies s m ̲ and t 1 ̲ .
Output: A = ( , , ) , if f = 0 , or
a minimal refined ALS A = ( u , A , v ) = ( 1 , A , λ ) , if f 0 .

1: k 2

2: while k # pb ( A ) do

3:   m # pb ( A )

4:   k m + 1 k

  Is the left subfamily ( s k ̲ , s k + 1 ̲ , , s m ̲ lin . independent ) K -linearly dependent?

5:  if T , U K n k × n k + 1 : m admissible : k ̲ ( A ) = k ̲ ( T , U ) = 0 then

6:   if k = 1 then

7:    return ( , , )

   endif

8:    A ( P ( T ) A Q ( U ) ) [ k ̲ ]

9:   if k > max { 2 , m + 1 2 } then

10:     k k 1

   endif

11:   continue

  endif

12:  if k = 1 and T , U K n k × n k + 1 : m : k ̲ ( A [ + 0 ] ) = k ̲ ( T , U ) = 0 then

13:    A ˜ ( P ( T ) A [ + 0 ] Q ( U ) ) [ k ̲ ]

14:    A A ˜ [ 0 ]

15:   if k > max { 2 , m + 1 2 } then

16:     k k 1

   endif

17:  continue

  endif

  Is the right subfamily ( t 1 ̲ , , t k 1 ̲ lin . independent , t k ̲ ) K -linearly dependent?

18:  if T , U K n k 1 : m × n k admissible : k ̲ ( A ) = k ̲ ( T , U ) = 0 then

19:    A ( P ( T ) A Q ( U ) ) [ k ̲ ]

20:   if k > max { 2 , m + 1 2 } then

21:      k k 1

    endif

22:   continue

   endif

23:   k k + 1

   done

24: return P A , with P, such that P v = [ 0 , , 0 , λ ]

Proof

The admissible linear system A represents f = 0 if and only if s 1 = ( A 1 v ) 1 = 0 . Since all systems are equivalent to A , this case is recognized for k = 1 because by Theorem 4.8 there is an admissible transformation such that the first left block minimization equation is fulfilled. Now assume f 0 . We have to show that both the left family s and the right family t of A = ( u , A , v ) are K -linearly independent, respectively. Let m = # pb ( A ) and for k { 1 , 2 , , m } denote by

s ( k ) = ( s m + 1 k ̲ , s m + 2 k ̲ , , s m ̲ ) and t ( k ) = t 1 ̲ , t 2 ̲ , , t k ̲

the left and the right subfamily, respectively. By assumption s ( 1 ) and t ( 1 ) are K -linearly independent, respectively. The loop starts with k = 2 . Only if both s ( k ) and t ( k ) are K -linearly independent, respectively, k is incremented. Otherwise, a left (Theorem 4.8) or a right (Theorem 4.11) minimization step was successful and the dimension of the current ALS is strictly smaller than that of the previous. Hence, since k is bounded from below, the algorithm stops in a finite number of steps. (How row 0 and column 0 are removed from the extended ALS in line 14 is illustrated in Example 4.5.) All transformations are such that A is a refined ALS (and therefore in a standard form). For # pb ( A ) = 1 a priori only the left (or the right) family is K -linearly independent (by assumption). But if that were not the case for the respective other family, then the assumption on refined pivot blocks would be contradicted by Theorem 4.11 (respectively, Theorem 4.8).□

Remark

Concerning details with respect to the complexity of such an algorithm we refer to [12, Remark 33]. Let d be the number of letters in our alphabet X. For m = n pivot blocks and k < n we have 2 ( k 1 ) unknowns. By Gaussian elimination one gets complexity O ( d n 3 ) for solving a linear system for a minimization step, see [25, Section 2.3]. To build such a system and working on a linear matrix pencil 0 u v A with d + 1 square coefficient matrices of size n + 1 (transformations, etc.) has complexity O ( d n 2 ) . So we get overall (minimization) complexity O ( d n 4 ) . The algorithm of Cardon and Crochemore [27] has complexity O ( d n 3 ) but works only for regular elements, that is, rational formal power series. For dim i ( A ) n , we get complexity O ( d n 5 ) and for the word problem [11] with m = 2 we get complexity O ( d n 6 ) .

Remark

It is clear that one can adapt the algorithm slightly if the input ALS is constructed by Proposition 2.2 out of two minimal admissible linear systems (for the sum and the product) in the standard form.

Remark

The solution of the word problem for two elements given by minimal admissible linear systems is independent of their refinement. If Algorithm 4.14 is applied to an ALS of which it is not known if it is refined, in some cases it is possible to check if the ALS A is minimal, for example, if dim ( A ) = # pb ( A ) . If the pivot blocks are bigger but the right upper structure is “finer” one can instead – for f 0  – try to minimize the inverse ( A ) 1 . In concrete situations, there might be other possibilities to reach minimality.

Remark 4.15

Apart from minimization, this algorithm can be used to check if f is a left factor of an element fg, which is relevant for the minimal factor multiplication [13, Theorem 5.2]. And one can check if two elements are disjoint which is important for the primary decomposition [9, Theorem 2.3].

Another aspect of Algorithm 4.14 (respectively, Theorems 4.8 and 4.11) becomes visible immediately with Proposition 1.7 and Remark 4.10. The importance of the following theorem becomes clear if one needs to check K -linear (in-)dependence of an arbitrary family ( f 1 , f 2 , , f n ) over the free field and it is not possible (anymore) to take a representation as formal power series (with coefficients over K ).

Theorem 4.16

(“Linear” Characterization of Minimality) A refined admissible linear system A = ( 1 , A , λ ) for an element in the free field F with m = # pb ( A ) 2 pivot blocks and λ 0 is minimal if and only if neither the left block minimization equations k ̲ ( A ) for k { 1 , 2 , , m 1 } nor the right block minimization equations k ̲ ( A ) for k { 2 , 3 , , m } admit a solution.

Proof

From the existence of a solution non-minimality follows immediately since in this case – after the appropriate transformation – rows and columns can be removed. And for non-minimality Proposition 1.7 implies that either the left or the right family is K -linearly dependent. Without loss of generality assume that it is the left s = ( s 1 ̲ , s 2 ̲ , , s m ̲ ) with minimal k { 1 , 2 , , m 1 } such that the left subfamily ( s k + 1 ̲ , , s m ̲ ) is K -linearly independent. Since the pivot blocks are refined, Theorem 4.8 implies the existence of a particular block row transformation ( P , Q ) k ̲ and therefore a solution of the k-th left block minimization equations.□

5 Applications

Since the focus of this work is mainly minimization and one dedicated to “minimal” rational operations – collecting all techniques for practical application – is already available [7], only two applications are illustrated in the following examples. For other applications see also Remark 4.15.

Example 5.1

[Hua’s Identity, 28] We have:

(5.2) x ( x 1 + ( y 1 x ) 1 ) 1 = x y x .

Proof

Minimal admissible linear systems for y 1 and x are

y s = 1 and 1 x . 1 s = 1 . 1 ,

respectively. The ALS for the difference y 1 x ,

y y . . 1 x . . 1 s = 1 1 . 1 , s = y 1 x x 1 , t = y 1 1 y 1 x ,

is minimal because the left family s is K -linearly independent and the right family t is K -linearly independent (Proposition 1.7). Clearly, we have 1 R ( y 1 x ) . Thus, by Lemma 2.5, there exists an admissible transformation

( P , Q ) = . 1 . 1 . 1 . . 1 , 1 . . 1 1 . . . 1 ,

which yields the ALS

1 1 x . y 1 . . 1 s = 1 . 1 . 1 .

Now we can apply the inverse of type ( 1 , 1 ) :

1 y x 1 s = 1 . 1 , s = ( y 1 x ) 1 ( 1 x y ) 1 .

This system represents a regular element ( y 1 x ) 1 = ( 1 y x ) 1 y , and therefore can be transformed into a regular ALS (Definition 1.12) by scaling row 2 by 1 . Then we add x 1 “from the left”:

x x . . 1 y . x 1 s = 1 1 . 1 , s = x 1 + ( y 1 x ) 1 ( y 1 x ) 1 ( 1 x y ) 1 .

This system is minimal and – after adding row 3 to row 1 (to eliminate the non-zero entry in the right hand side) – we apply the (minimal) inverse of type ( 0 , 0 ) :

(5.3) 1 1 x . . y 1 . . 1 0 x . . . 1 s = 1 . 1 . 1 . 1 .

Now we multiply row 1 and columns 2 and 3 by 1 and exchange columns 2 and 3 to get the following system:

1 x 1 . . 1 y . . . 1 x . . . 1 s = 1 . 1 . 1 . 1 , s = x x y x y x x 1 .

The next step would be a scaling by 1 and the addition of x (by Proposition 2.2). With two minimization steps we would reach again minimality. Alternatively, we can add a linear term to a polynomial (in a polynomial ALS) – depending on the entry v n in the right hand side – directly in the upper right entry of the system matrix:

1 x 1 x . 1 y . . . 1 x . . . 1 s = 1 . 1 . 1 . 1 , s = x y x y x x 1 .

Remarks

The transformation of the ALS (5.3) is a simple case of the refinement of a pivot block and is discussed in detail in Section 3. Hua’s identity is also an example in [6]. It is worth to compare both approaches.

Example 5.4

(Left GCD) Given two polynomials p , q K X \ K , one can compute the left (respectively, right) greatest common divisor of p and q by minimizing an admissible linear system for p 1 q (respectively, p q 1 ). This is now illustrated in the following example. Let p = y x ( 1 y x ) z = y x z y x y x z and q = y ( 1 x y ) y = y 2 y x y 2 . We want to find h = lgcd ( p , q ) . An ALS for p 1 q (constructed out of minimal admissible linear systems for p 1 and q by Proposition 2.6) is

z 1 . . . . . . . . x 1 . . . . . . . 1 y 1 . . . . . . . . x 1 . . . . . . . . y y . . . . . . . . 1 x 1 . . . . . . . 1 y . . . . . . . . 1 y . . . . . . . . 1 s = 1 . 1 . 1 . 1 . 1 . 1 . 1 . 1 . 1 .

Clearly, this system is refined. How to refine an ALS is discussed in Section 3. Note that there is a close connection to the factorization of p, for details see [12]. As a first step, we add column 5 to column 6 and row 6 to row 4,

z 1 . . . 0 . . . . x 1 . . 0 . . . . 1 y 1 . 0 . . . . . . x 1 0 x 1 . . . . . y 0 0 0 0 . . . . . 1 x 1 . . . . . . . 1 y . . . . . . . . 1 y . . . . . . . . 1 s = 1 . 1 . 1 . 1 . 1 . 1 . 1 . 1 . 1 ,

remove rows and columns 5 and 6,

(5.5) z 1 . . . . . . x 1 . . . . . 1 y 1 . . . . . . x x 1 . . . . . 1 y . . . . . . 1 y . . . . . . 1 s = 1 . 1 . 1 . 1 . 1 . 1 . 1

and remember the first (left) divisor h 1 = y we have eliminated. (In the next step with a bigger block one can see immediately how to “read” a divisor directly from the ALS.) Now there are two ways to proceed: If it is not possible to create a zero block in the “L”-form (like before), one can try to change the upper pivot block structure to create a “double-L” zero block. Here, this is possible by subtracting column 4 from column 2 and adding row 2 to row 4. (For details on similarity unique factorization in this context see [12].) Afterward, we apply the (admissible) transformation

( P , Q ) = 1 . . . . . . . 1 . . . 1 . . . 1 . 1 . . . . . 1 . . . . . . . 1 . . . . . . . 1 . . . . . . . 1 , 1 . . . . . . . 1 . . . . . . . 1 . . 1 . . . . 1 1 . . . . . . 1 . . . . . . . 1 . . . . . . . 1

to get the ALS

(5.6) z 1 . . 0 0 . . x 1 . 0 0 y . . y 1 0 0 0 . . 1 x 0 0 0 . . . . 1 y . . . . . . 1 y . . . . . . 1 s = 1 . 1 . 1 . 1 . 1 . 1 . 1 .

How to get this transformation, ( P , Q ) is described in principle in Section 4. One can look directly for a “double-L” block transformation. In the third pivot block of (5.6), one can see immediately that a further (common) left factor is h 2 = 1 x y because the second equation reads x s 2 h 2 1 = 0 . We have eliminated ( 1 x y ) 1 ( 1 x y ) . Recall that a minimal ALS for h 2 is

1 x 1 . 1 y . . 1 s = 1 . 1 . 1 ,

hence rank ( h 2 ) = 3 and (by Theorem 2.11) rank ( h 2 1 ) = 2 . Or, more general, for a (left) factor h i with rank ( h i ) = n i 2 we can construct (by Proposition 2.6) an ALS of dimension 2 ( n i 1 ) . After removing rows and columns { 3 , 4 , 5 , 6 } in the ALS (5.6), we obtain for p 1 q the (in this case minimal) ALS

(5.7) z 1 . . x y . . 1 s = 1 . 1 . 1 .

Hence, h = h 1 h 2 = y ( 1 x y ) = lgcd ( p , q ) . The second possibility – starting from ALS (5.5) – is to do a right minimization step with respect to column 5, then one left with respect to rows 2 and 3 and finally a right (minimization step). Again one obtains the ALS (5.7) (up to admissible scaling of rows and columns) where the right factor y of q remains. Therefore, lgcd ( p , q ) = y y x y . For further details concerning the minimal polynomial multiplication (Proposition 2.10) we refer to [12].

Remark

It can happen that – once no further "L"-minimization step is possible – the ALS is still not minimal, that is, an additional "single" left or right minimization step can be carried out. More details on that are part of the general factorization theory [13]. Here it suffices to take a closer look at the ALS (5.7): both right factors, that of $p$ (here $xz$) and that of $q$ (here $y$), can still be "read" directly. (This would no longer be possible if one further left or right step were carried out.)
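A quick numerical plausibility check of the ALS (5.7) is also possible: evaluating the letters at random square matrices (for which, generically, all inverses involved exist), the first block of the solution vector must agree with $p^{-1}q$. A minimal sketch, assuming this random-evaluation setup:

```python
# Sketch: evaluate ALS (5.7) at random matrices and compare with p^{-1} q.
import numpy as np

rng = np.random.default_rng(1)
k = 4
X, Y, Z = (rng.standard_normal((k, k)) for _ in range(3))
I, O = np.eye(k), np.zeros((k, k))

Pm = Y @ X @ (I - Y @ X) @ Z  # p = yx(1-yx)z, evaluated
Qm = Y @ (I - X @ Y) @ Y      # q = y(1-xy)y, evaluated

# blown-up system matrix of (5.7) and right-hand side (0, 0, 1)^T
A = np.block([[Z, I, O], [O, X, Y], [O, O, I]])
v = np.vstack([O, O, I])
s = np.linalg.solve(A, v)

# the first block of the solution is p^{-1} q = z^{-1} x^{-1} y
assert np.allclose(s[:k], np.linalg.inv(Pm) @ Qm)
```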

Remark 5.8

For a refined ALS for $p^{-1}$, an ALS of dimension $n$ for $p^{-1}q$ with "factors" of rank at most $n$ and an alphabet with $d$ letters, the complexity for computing the left (or right) gcd is roughly $O(dn^5)$. Although refinement is difficult in general, because of the necessity of solving systems of polynomial equations (over a not necessarily algebraically closed field), for polynomials in particular linear techniques are very useful for the factorization (and therefore for the refinement of the inverse). As an example, we take the polynomial $p = (1-xy)(2+yx)(3-yz)(2-zy)(1-xz)(3+zx)x$ of rank $n = 14$, which already has 64 terms. To get the first left (irreducible) factor (of rank 3), we just need to create an upper right block of zeros (in the system matrix) of size $2 \times 11$, which can be accomplished by using either columns 2–3 and rows 4–13 or column 2 and rows 3–13 (for elimination) [12]. Both cases result in a linear system of equations because the column and row transformations do not "overlap." Solving one of these $2(n-2)$ systems has complexity (at most) $O(dn^6)$; hence in total we have $O(dn^7)$. Checking irreducibility of polynomials (using Gröbner bases) works in practice up to rank 12 [29].
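The count of $64 = 2^6$ terms simply reflects choosing one of the two summands in each of the six binomial factors; a minimal sketch (added here) confirming it by expansion:

```python
# Sketch: expand the rank-14 example polynomial and count its terms.
from sympy import expand, symbols

x, y, z = symbols('x y z', commutative=False)

p = expand((1 - x*y)*(2 + y*x)*(3 - y*z)*(2 - z*y)*(1 - x*z)*(3 + z*x)*x)
assert len(p.args) == 64  # 2**6 pairwise distinct (non-commutative) words
```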

6 Epilogue

This work is the last in a series developing tools for working with linear representations (of elements in the free field), especially with a view towards implementations in computer algebra systems. A "practical" guide giving an overview and an introduction is [22] (in German, with remarks on the implementation); see also [7].

The main idea is as simple as the use of "classical" fractions (for elements in the field of rational numbers): calculating, factorizing and minimizing (or cancelling), for example

$$\frac{2}{3} \cdot \frac{3}{4} = \frac{6}{12} = \frac{2 \cdot 3}{2 \cdot 2 \cdot 3} = \frac{1}{2}$$

or

$$\frac{1}{2} + \frac{3}{2} = \frac{4}{2} = \frac{2 \cdot 2}{2} = 2.$$

At some point one stops this loop and uses the fraction (with coprime numerator and denominator). Clearly, one could simplify matters by remembering the factorization of the numerator (for the product) and of the denominator (for the sum and the product). In our case of the free field, linear representations (or admissible linear systems) are just "free fractions." However, to understand how the transition from nc rational expressions (for representing elements in the free field) to (minimal) admissible linear systems in standard form affects the capabilities of thinking about (free) nc algebra, one needs to go to the meta level [30].
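For classical fractions this minimization is just cancellation by the greatest common divisor, as performed automatically by, for example, Python's Fraction type; a minimal sketch of the two computations above:

```python
# Sketch: the two "classical" computations, with automatic cancellation.
from fractions import Fraction

assert Fraction(2, 3) * Fraction(3, 4) == Fraction(1, 2)
assert Fraction(1, 2) + Fraction(3, 2) == Fraction(2, 1)
```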

Acknowledgments

I am very grateful to Wolfgang Tutschke and Thomas Hirschler for encouraging me to follow my mathematical path in a difficult time. And I thank the anonymous referees for their constructive remarks, in particular for stressing the various connections to related areas and for improving the exposition. The open access funding is provided by the University of Vienna.

References

[1] O. Ore, Linear equations in non-commutative fields, Ann. of Math. (2) 32 (1931), no. 3, 463–477, doi:10.2307/1968245.

[2] P. M. Cohn, Free Ideal Rings and Localization in General Rings, Volume 3 of New Mathematical Monographs, Cambridge University Press, Cambridge, 2006, doi:10.1017/CBO9780511542794.

[3] T. Y. Lam, Lectures on Modules and Rings, Volume 189 of Graduate Texts in Mathematics, Springer-Verlag, New York, 1999, doi:10.1007/978-1-4612-0525-8.

[4] P. M. Cohn, Ringe mit distributivem Faktorverband, Abh. Braunschweig. Wiss. Ges. 33 (1982), 35–40.

[5] P. M. Cohn, Free Rings and their Relations, Volume 19 of London Mathematical Society Monographs, Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], London, second edition, 1985.

[6] P. M. Cohn and C. Reutenauer, A normal form in free fields, Canad. J. Math. 46 (1994), no. 3, 517–531, doi:10.4153/CJM-1994-027-4.

[7] K. Schrempf, Free fractions: An invitation to (applied) free fields, ArXiv e-prints, 2018, http://arxiv.org/pdf/1809.05425.

[8] P. M. Cohn, Further Algebra and Applications, Springer-Verlag London, Ltd., London, 2003, doi:10.1007/978-1-4471-0039-3.

[9] P. M. Cohn and C. Reutenauer, On the construction of the free field, Internat. J. Algebra Comput. 9 (1999), no. 3–4, 307–323, doi:10.1142/S0218196799000205.

[10] P. M. Cohn, Skew Fields, Volume 57 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, 1995.

[11] K. Schrempf, Linearizing the word problem in (some) free fields, Internat. J. Algebra Comput. 28 (2018), no. 7, 1209–1230, doi:10.1142/S0218196718500546.

[12] K. Schrempf, On the factorization of non-commutative polynomials (in free associative algebras), J. Symbolic Comput. 94 (2019), 126–148, doi:10.1016/j.jsc.2018.07.004.

[13] K. Schrempf, A factorization theory for some free fields, Int. Electron. J. Algebra 28 (2020), 9–42, doi:10.24330/ieja.768114.

[14] A. Garg, L. Gurvits, R. Oliveira, and A. Wigderson, A deterministic polynomial time algorithm for non-commutative rational identity testing, in: 57th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2016), 109–117, IEEE Computer Soc., Los Alamitos, CA, 2016, doi:10.1109/FOCS.2016.95.

[15] G. Ivanyos, Y. Qiao, and K. V. Subrahmanyam, Constructive non-commutative rank computation is in deterministic polynomial time, Comput. Complexity 27 (2018), no. 4, 561–593, doi:10.1007/s00037-018-0165-7.

[16] J. W. Helton, T. Mai, and R. Speicher, Applications of realizations (aka linearizations) to free probability, J. Funct. Anal. 274 (2018), no. 1, 1–79, doi:10.1016/j.jfa.2017.10.003.

[17] J. Volčič, Matrix coefficient realization theory of noncommutative rational functions, J. Algebra 499 (2018), 397–437, doi:10.1016/j.jalgebra.2017.12.009.

[18] I. Gohberg, P. Lancaster, and L. Rodman, Invariant Subspaces of Matrices with Applications, Volume 51 of Classics in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2006 (reprint of the 1986 original), doi:10.1137/1.9780898719093.

[19] J. W. Helton, I. Klep, and J. Volčič, Geometry of free loci and factorization of noncommutative polynomials, Adv. Math. 331 (2018), 589–626, doi:10.1016/j.aim.2018.04.007.

[20] J. Berstel and C. Reutenauer, Noncommutative Rational Series with Applications, Volume 137 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, 2011, doi:10.1017/CBO9780511760860.

[21] A. Salomaa and M. Soittola, Automata-Theoretic Aspects of Formal Power Series, Texts and Monographs in Computer Science, Springer-Verlag, New York-Heidelberg, 1978, doi:10.1007/978-1-4612-6264-0.

[22] K. Schrempf, Über die Konstruktion minimaler linearer Darstellungen von Elementen des freien Schiefkörpers (freier assoziativer Algebren), alias: "Das Rechnen mit Freien Brüchen" [On the construction of minimal linear representations of elements of the free skew field (of free associative algebras), alias: "Computing with free fractions"], PhD thesis, Universität Wien, 2018.

[23] L. A. Bokut and P. S. Kolesnikov, Gröbner-Shirshov bases: from inception to the present time, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 272 (2000), Vopr. Teor. Predst. Algebr i Grupp. 7, 26–67, 345 (in Russian); English translation in J. Math. Sci. (N. Y.) 116 (2003), no. 1, 2894–2916.

[24] B. Sturmfels, Solving Systems of Polynomial Equations, Volume 97 of CBMS Regional Conference Series in Mathematics, American Mathematical Society, Providence, RI, 2002, doi:10.1090/cbms/097.

[25] D. A. Cox, J. Little, and D. O'Shea, Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra, Undergraduate Texts in Mathematics, Springer, Cham, fourth edition, 2015, doi:10.1007/978-3-319-16721-3.

[26] J. W. Demmel, Applied Numerical Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1997, doi:10.1137/1.9781611971446.

[27] A. Cardon and M. Crochemore, Détermination de la représentation standard d'une série reconnaissable [Determination of the standard representation of a recognizable series], RAIRO Inform. Théor. 14 (1980), no. 4, 371–379, doi:10.1051/ita/1980140403711.

[28] S. A. Amitsur, Rational identities and applications to algebra and geometry, J. Algebra 3 (1966), 304–359, doi:10.1016/0021-8693(66)90004-4.

[29] B. Janko, Factorization of non-commutative polynomials and testing fullness of matrices, Diplomarbeit, TU Graz, 2018.

[30] S. Krämer, Mathematizing power, formalization, and the diagrammatical mind or: what does "Computation" mean?, Philos. Technol. 27 (2014), no. 3, 345–357, doi:10.1007/s13347-012-0094-3.

Received: 2020-02-18
Revised: 2020-06-28
Accepted: 2020-07-28
Published Online: 2020-11-23

© 2020 Konrad Schrempf, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
