Article, Open Access

Convergence rate of the modified Levenberg-Marquardt method under Hölderian local error bound

  • Lin Zheng, Liang Chen, and Yangxin Tang
Published/Copyright: September 12, 2022

Abstract

In this article, we analyze the convergence rate of the modified Levenberg-Marquardt (MLM) method under the Hölderian local error bound condition and the Hölderian continuity of the Jacobian, which are more general than the local error bound condition and the Lipschitz continuity of the Jacobian. Under special circumstances, the convergence rate of the MLM method coincides with the results presented by Fan. A globally convergent MLM algorithm by the trust region technique will also be given.

MSC 2010: 65K05; 90C30

1 Introduction

We consider the system of nonlinear equations

(1.1) F ( x ) = 0 ,

where $F(x): \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable. Denote by $X^*$ the solution set of (1.1) and by $\|\cdot\|$ the 2-norm. Throughout the article, we assume that $X^*$ is nonempty, since (1.1) may have no solutions due to the nonlinearity of $F(x)$.

Nonlinear equations play an important role in many fields of science, and many numerical methods have been developed to solve them [1,2,3]. Many efficient solution techniques, such as the Newton method, quasi-Newton methods, the Gauss-Newton method, trust region methods, and the Levenberg-Marquardt method, are available for this problem [1–22].

The most common method to solve (1.1) is the Newton method. At every iteration, it computes the trial step

(1.2) $d_k^N = -J_k^{-1} F_k$,

where $F_k = F(x_k)$ and $J_k = F'(x_k)$ is the Jacobian. If $J(x)$ is Lipschitz continuous and nonsingular at the solution, then the Newton method converges quadratically. However, the Newton method has some disadvantages, especially when the Jacobian matrix $J_k$ is singular or nearly singular. To overcome the difficulties caused by the possible singularity of $J_k$, the Levenberg-Marquardt method [2,3] computes the trial step by

(1.3) $d_k^{LM} = -(J_k^T J_k + \lambda_k I)^{-1} J_k^T F_k$,

where λ k > 0 is the LM parameter that is updated in every iteration.

However, the assumption that the Jacobian is nonsingular is quite strong. The local error bound condition is weaker than the nonsingularity condition. It requires that

(1.4) $c\,\mathrm{dist}(x, X^*) \le \|F(x)\|, \quad \forall x \in N(x^*)$

holds for some constant $c > 0$, where $\mathrm{dist}(x, X^*)$ is the distance from $x$ to $X^*$ and $N(x^*)$ is some neighborhood of $x^* \in X^*$.

Under the local error bound condition, Yamashita and Fukushima [4] and Fan and Yuan [5] showed that the LM method converges quadratically when the LM parameter is chosen as $\lambda_k = \|F_k\|^2$ and $\lambda_k = \|F_k\|^\alpha$ with $\alpha \in [1, 2]$, respectively. Interested readers are referred to [6,7,8] for related work.

Inspired by the two-step Newton’s method, Fan [9] presented a modified Levenberg-Marquardt (MLM) method with an approximate LM step

(1.5) $d_k^{MLM} = -(J_k^T J_k + \lambda_k I)^{-1} J_k^T F(y_k)$,

where y k = x k + d k LM , and the trial step is

(1.6) s k = d k LM + d k MLM .

The MLM method has cubic convergence under the local error bound condition. For more general cases, Fan [19] gave an accelerated version of the MLM method, extending the LM parameter $\lambda_k = \|F(x_k)\|^\alpha$ from $\alpha \in [1, 2]$ to $\alpha \in [0, 2]$. The convergence order of the accelerated MLM method is $\min\{1 + 2\alpha, 3\}$, which is a continuous function of $\alpha$.

To save more Jacobian calculations and achieve a fast convergence rate, Zhao and Fan [10] and Chen [11] presented a higher-order Levenberg-Marquardt method by computing the approximate step twice, and the method has biquadratic convergence under the local error bound condition.

The Levenberg-Marquardt method is now very widely used. It is a classical method for solving nonlinear least squares problems, and it can also be applied to financial problems [23,24]. In real applications, some nonlinear equations may not satisfy the local error bound condition but do satisfy the Hölderian local error bound condition, defined as follows.

Definition 1.1

We say that $F(x)$ provides a Hölderian local error bound of order $\gamma \in (0, 1]$ in some neighborhood of $x^* \in X^*$, if there exists a constant $c > 0$ such that

(1.7) $c\,\mathrm{dist}(x, X^*) \le \|F(x)\|^\gamma, \quad \forall x \in N(x^*)$.

Comparing (1.4) and (1.7), we can see that the Hölderian local error bound condition is more general; when $\gamma = 1$, the local error bound condition is included as a special case. Hence, the local error bound condition is the stronger one. For example, the Powell singular function [25]

$h(x_1, x_2, x_3, x_4) = \left(x_1 + 10x_2,\ \sqrt{5}(x_3 - x_4),\ (x_2 - 2x_3)^2,\ \sqrt{10}(x_1 - x_4)^2\right)^T$

satisfies the Hölderian local error bound condition of order $\frac{1}{2}$ around the zero point but does not satisfy the local error bound condition [12]. In a biochemical reaction network, the problem of finding the moiety conserved steady state can be formulated as a system of nonlinear equations, which satisfies the Hölderian local error bound condition [12]. Recently, some scholars discussed the convergence results of the LM method under the Hölderian local error bound condition and the Hölderian continuity of the Jacobian [12,13,14,22]. In this article, we will investigate the convergence rate of the MLM method under the Hölderian local error bound condition and Hölderian continuity of the Jacobian, which are more general than the local error bound condition and the Lipschitz continuity of the Jacobian.
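The two claims about the Powell singular function can be checked numerically: approaching the unique zero $x^* = 0$ along a direction in the null space of $J(0)$, the ratio $\|h(x)\| / \mathrm{dist}(x, X^*)$ tends to zero (so (1.4) fails), while $\|h(x)\|^{1/2} / \mathrm{dist}(x, X^*)$ stays bounded away from zero. The script below is an illustrative check; the chosen direction is one of several possible.

```python
import numpy as np

def powell_singular(x):
    x1, x2, x3, x4 = x
    return np.array([x1 + 10 * x2,
                     np.sqrt(5) * (x3 - x4),
                     (x2 - 2 * x3) ** 2,
                     np.sqrt(10) * (x1 - x4) ** 2])

# Approach x* = 0 along (0, 0, 1, 1), a null direction of J(0).
for t in [1e-1, 1e-2, 1e-3]:
    x = t * np.array([0.0, 0.0, 1.0, 1.0])
    dist = np.linalg.norm(x)                              # dist(x, X*) with X* = {0}
    r1 = np.linalg.norm(powell_singular(x)) / dist        # -> 0: (1.4) fails
    r2 = np.linalg.norm(powell_singular(x)) ** 0.5 / dist # constant: (1.7) with gamma = 1/2
    print(t, r1, r2)
```

Along this direction $\|h(x)\| = \sqrt{26}\,t^2$ while $\mathrm{dist}(x, X^*) = \sqrt{2}\,t$, so $r_2 = 26^{1/4}/\sqrt{2}$ for every $t$.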

This article is organized as follows. In Section 2, we propose an MLM algorithm and show that it converges globally under the Hölderian continuity of the Jacobian. In Section 3, we study the convergence rate of the algorithm under the Hölderian local error bound condition and the Hölderian continuity of the Jacobian. We finish the work with some conclusions and references.

2 A globally convergent MLM algorithm

In this section, we propose an MLM algorithm by the trust region technique and then prove that it converges globally under the Hölderian continuity of the Jacobian.

We take

(2.1) $\Phi(x) = \|F(x)\|^2$

as the merit function for (1.1). We define the actual reduction of Φ ( x ) at the k th iteration as

(2.2) $\mathrm{Ared}_k = \|F_k\|^2 - \|F(x_k + d_k^{LM} + d_k^{MLM})\|^2$.

The predicted reduction needs to be nonnegative.

Note that the step d k LM in (1.3) is the minimizer of the convex minimization problem

(2.3) $\min_{d \in \mathbb{R}^n} \|F_k + J_k d\|^2 + \lambda_k \|d\|^2 \eqqcolon \varphi_{k,1}(d)$.

If we let

(2.4) $\Delta_{k,1} = \|d_k^{LM}\| = \|(J_k^T J_k + \lambda_k I)^{-1} J_k^T F_k\|$,

then it can be verified that d k LM is also a solution of the trust region subproblem

(2.5) $\min_{d \in \mathbb{R}^n} \|F_k + J_k d\|^2 \quad \text{s.t.} \quad \|d\| \le \Delta_{k,1}$.

We obtain from the result given by Powell [15] that

(2.6) $\|F_k\|^2 - \|F_k + J_k d_k^{LM}\|^2 \ge \|J_k^T F_k\| \min\left\{\|d_k^{LM}\|,\ \frac{\|J_k^T F_k\|}{\|J_k^T J_k\|}\right\}$.

In the same way, the step d k MLM in (1.5) is not only the minimizer of the problem

(2.7) $\min_{d \in \mathbb{R}^n} \|F(y_k) + J_k d\|^2 + \lambda_k \|d\|^2 \eqqcolon \varphi_{k,2}(d)$,

but also the solution of the trust region problem

(2.8) $\min_{d \in \mathbb{R}^n} \|F(y_k) + J_k d\|^2 \quad \text{s.t.} \quad \|d\| \le \Delta_{k,2}$,

where

(2.9) $\Delta_{k,2} = \|d_k^{MLM}\| = \|(J_k^T J_k + \lambda_k I)^{-1} J_k^T F(y_k)\|$.

Thus, we also obtain

(2.10) $\|F(y_k)\|^2 - \|F(y_k) + J_k d_k^{MLM}\|^2 \ge \|J_k^T F(y_k)\| \min\left\{\|d_k^{MLM}\|,\ \frac{\|J_k^T F(y_k)\|}{\|J_k^T J_k\|}\right\}$.

We define the newly predicted reduction from (2.6) and (2.10) as follows:

(2.11) $\mathrm{Pred}_k = \|F_k\|^2 - \|F_k + J_k d_k^{LM}\|^2 + \|F(y_k)\|^2 - \|F(y_k) + J_k d_k^{MLM}\|^2$,

which satisfies

(2.12) $\mathrm{Pred}_k \ge \|J_k^T F_k\| \min\left\{\|d_k^{LM}\|,\ \frac{\|J_k^T F_k\|}{\|J_k^T J_k\|}\right\} + \|J_k^T F(y_k)\| \min\left\{\|d_k^{MLM}\|,\ \frac{\|J_k^T F(y_k)\|}{\|J_k^T J_k\|}\right\}$.

The ratio of the actual reduction to the predicted reduction

(2.13) $r_k = \dfrac{\mathrm{Ared}_k}{\mathrm{Pred}_k}$

is used in deciding whether to accept the trial step and how to update the MLM parameter λ k .

The algorithm is presented as follows.

Algorithm 2.1

Given $x_1 \in \mathbb{R}^n$, $0 < \alpha \le 2$, $\mu_1 > m > 0$, $0 < p_0 \le p_1 \le p_2 < 1$, $a_1 > 1 > a_2 > 0$. Set $k \coloneqq 1$.

Step 1. If J k T F k = 0 , then stop. Solve

(2.14) $(J_k^T J_k + \lambda_k I) d = -J_k^T F_k \quad \text{with} \quad \lambda_k = \mu_k \|F_k\|^\alpha$

to obtain d k LM and set

y k = x k + d k LM .

Solve

(2.15) $(J_k^T J_k + \lambda_k I) d = -J_k^T F(y_k)$

to obtain d k MLM and set

s k = d k LM + d k MLM .

Step 2. Compute r k = Ared k / Pred k . Set

(2.16) $x_{k+1} = \begin{cases} x_k + s_k, & \text{if } r_k \ge p_0, \\ x_k, & \text{otherwise}. \end{cases}$

Step 3. Choose μ k + 1 as

(2.17) $\mu_{k+1} = \begin{cases} a_1 \mu_k, & \text{if } r_k < p_1, \\ \mu_k, & \text{if } r_k \in [p_1, p_2], \\ \max\{a_2 \mu_k, m\}, & \text{if } r_k > p_2. \end{cases}$

Set $k \coloneqq k + 1$ and go to Step 1.
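The three steps above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `mlm`, the stopping tolerance, and the constant values are our choices (they satisfy $\mu_1 > m > 0$, $0 < p_0 \le p_1 \le p_2 < 1$, $a_1 > 1 > a_2 > 0$).

```python
import numpy as np

def mlm(F, J, x, alpha=1.0, mu=1e-2, m=1e-8,
        p0=1e-4, p1=0.25, p2=0.75, a1=4.0, a2=0.25, tol=1e-10, kmax=100):
    """A sketch of Algorithm 2.1 with illustrative constants."""
    n = x.size
    for _ in range(kmax):
        Fx, Jx = F(x), J(x)
        g = Jx.T @ Fx
        if np.linalg.norm(g) <= tol:            # Step 1 stopping test
            break
        lam = mu * np.linalg.norm(Fx) ** alpha
        A = Jx.T @ Jx + lam * np.eye(n)
        d_lm = np.linalg.solve(A, -g)           # LM step from (2.14)
        y = x + d_lm
        Fy = F(y)
        d_mlm = np.linalg.solve(A, -Jx.T @ Fy)  # approximate step from (2.15)
        s = d_lm + d_mlm
        ared = np.linalg.norm(Fx)**2 - np.linalg.norm(F(x + s))**2
        pred = (np.linalg.norm(Fx)**2 - np.linalg.norm(Fx + Jx @ d_lm)**2
                + np.linalg.norm(Fy)**2 - np.linalg.norm(Fy + Jx @ d_mlm)**2)
        r = ared / pred if pred > 0 else -1.0   # ratio (2.13)
        if r >= p0:                             # Step 2: accept or reject s_k
            x = x + s
        if r < p1:                              # Step 3: update mu_k by (2.17)
            mu = a1 * mu
        elif r > p2:
            mu = max(a2 * mu, m)
    return x
```

Note that both linear systems share the coefficient matrix $J_k^T J_k + \lambda_k I$, so in practice one factorization per iteration suffices.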

We give some assumptions before studying the global convergence of Algorithm 2.1.

Assumption 2.2

  1. The Jacobian $J(x)$ is Hölderian continuous of order $v \in (0, 1]$, i.e., there exists a positive constant $\kappa_{hj}$ such that

    (2.18) $\|J(x) - J(y)\| \le \kappa_{hj} \|x - y\|^v, \quad \forall x, y \in \mathbb{R}^n$.

  2. $\|J(x)\|$ is bounded above, i.e., there exists a positive constant $\kappa_{bj}$ such that

    (2.19) $\|J(x)\| \le \kappa_{bj}, \quad \forall x \in \mathbb{R}^n$.

From (2.18), we can obtain

(2.20) $\|F(y) - F(x) - J(x)(y - x)\| = \left\| \int_0^1 J(x + t(y - x))(y - x)\,\mathrm{d}t - J(x)(y - x) \right\| \le \|y - x\| \int_0^1 \|J(x + t(y - x)) - J(x)\|\,\mathrm{d}t \le \kappa_{hj} \|y - x\|^{1+v} \int_0^1 t^v\,\mathrm{d}t = \frac{\kappa_{hj}}{1+v} \|y - x\|^{1+v}$.
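The bound (2.20) is easy to verify numerically in the Lipschitz case $v = 1$, where it reads $\|F(y) - F(x) - J(x)(y - x)\| \le \frac{\kappa_{hj}}{2} \|y - x\|^2$. The smooth test function below is an arbitrary example of ours, not from the article.

```python
import numpy as np

# An F with a Lipschitz Jacobian (v = 1), so the ratio below must stay bounded.
F = lambda x: np.array([np.sin(x[0]) + x[1]**2, x[0] * x[1]])
J = lambda x: np.array([[np.cos(x[0]), 2 * x[1]], [x[1], x[0]]])

rng = np.random.default_rng(0)
x = rng.standard_normal(2)
for h in [1e-1, 1e-2, 1e-3]:
    y = x + h * rng.standard_normal(2)
    # Linearization error divided by ||y - x||^2: bounded by kappa_hj / 2.
    ratio = (np.linalg.norm(F(y) - F(x) - J(x) @ (y - x))
             / np.linalg.norm(y - x) ** 2)
    print(h, ratio)
```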

Theorem 2.3

Let Assumption 2.2 hold. Then Algorithm 2.1 either terminates in finitely many iterations or satisfies

(2.21) $\lim_{k \to \infty} \|J_k^T F_k\| = 0$.

Proof

We prove the theorem by contradiction. Suppose that (2.21) is not true; then there exist a positive constant $\tau$ and infinitely many $k$ such that

(2.22) $\|J_k^T F_k\| \ge \tau$.

Let S 1 , S 2 be the sets of the indices as follows:

$S_1 = \{k \mid \|J_k^T F_k\| \ge \tau\}, \qquad S_2 = \left\{k \,\middle|\, \|J_k^T F_k\| \ge \frac{\tau}{2} \text{ and } x_{k+1} \ne x_k\right\}$.

Then, $S_1$ is an infinite set. In the following, we derive a contradiction in each of the two cases: $S_2$ finite and $S_2$ infinite.

Case I: S 2 is finite. Then, the set

$S_3 = \{k \mid \|J_k^T F_k\| \ge \tau \text{ and } x_{k+1} \ne x_k\}$

is also finite. Let $\tilde{k}$ be the largest index of $S_3$. Then $x_{k+1} = x_k$ holds for all $k \in \{k > \tilde{k} \mid k \in S_1\}$. Define the index set

$S_4 = \{k > \tilde{k} \mid \|J_k^T F_k\| \ge \tau \text{ and } x_{k+1} = x_k\}$.

If $k \in S_4$, then $x_{k+1} = x_k$ implies $\|J_{k+1}^T F_{k+1}\| \ge \tau$, and since $k + 1 > \tilde{k}$, we also have $x_{k+2} = x_{k+1}$. Hence, $k + 1 \in S_4$. By induction, we know that $\|J_k^T F_k\| \ge \tau$ and $x_{k+1} = x_k$ hold for all $k > \tilde{k}$, which implies that $r_k < p_0$ for all such $k$. Therefore, we have

(2.23) $\mu_k \to \infty \quad \text{and} \quad \lambda_k \to \infty$

due to (2.14). Hence, we obtain

(2.24) $\|d_k^{LM}\| \to 0$.

Moreover, it follows from (2.8), (2.20), and (2.23) that

(2.25) $\|d_k^{MLM}\| = \|(J_k^T J_k + \lambda_k I)^{-1} J_k^T F(y_k)\| \le \|(J_k^T J_k + \lambda_k I)^{-1} J_k^T F_k\| + \|(J_k^T J_k + \lambda_k I)^{-1} J_k^T J_k d_k^{LM}\| + \frac{\kappa_{hj}}{1+v} \|d_k^{LM}\|^{1+v} \|(J_k^T J_k + \lambda_k I)^{-1} J_k^T\| \le \|d_k^{LM}\| + \|d_k^{LM}\| + \frac{\kappa_{hj}\kappa_{bj}}{\lambda_k(1+v)} \|d_k^{LM}\|^{1+v} \le c_1 \|d_k^{LM}\|$

holds for all sufficiently large k , where c 1 is a positive constant. Therefore, we have

(2.26) $\|s_k\| = \|d_k^{LM} + d_k^{MLM}\| \le (1 + c_1)\|d_k^{LM}\|$.

Furthermore, it follows from (2.12), (2.19), (2.22), (2.24), and (2.26) that

(2.27) $|r_k - 1| = \left| \frac{\mathrm{Ared}_k - \mathrm{Pred}_k}{\mathrm{Pred}_k} \right| \le \frac{\left| \|F(x_k + d_k^{LM} + d_k^{MLM})\|^2 - \|F_k + J_k d_k^{LM}\|^2 \right| + \left| \|F(y_k)\|^2 - \|F(y_k) + J_k d_k^{MLM}\|^2 \right|}{\|J_k^T F_k\| \min\left\{\|d_k^{LM}\|,\ \frac{\|J_k^T F_k\|}{\|J_k^T J_k\|}\right\}} \le \frac{\|F_k + J_k s_k\| O(\|d_k^{LM}\|^{1+v}) + O(\|d_k^{LM}\|^{2+2v}) + \|F_k + J_k d_k^{LM}\| O(\|d_k^{LM}\|^{1+v})}{\|J_k^T F_k\| \min\left\{\|d_k^{LM}\|,\ \frac{\|J_k^T F_k\|}{\|J_k^T J_k\|}\right\}} \to 0$,

which implies that $r_k \to 1$. In view of the updating rule (2.17) of $\mu_k$, there exists a positive constant $\tilde{m} > m$ such that $\mu_k < \tilde{m}$ holds for all sufficiently large $k$, which contradicts (2.23).

Case II: S 2 is infinite. It follows from (2.12) and (2.19) that

(2.28) $\|F_1\|^2 \ge \sum_{k \in S_2} (\|F_k\|^2 - \|F_{k+1}\|^2) \ge \sum_{k \in S_2} p_0\,\mathrm{Pred}_k \ge \sum_{k \in S_2} p_0 \left[ \|J_k^T F_k\| \min\left\{\|d_k^{LM}\|,\ \frac{\|J_k^T F_k\|}{\|J_k^T J_k\|}\right\} + \|J_k^T F(y_k)\| \min\left\{\|d_k^{MLM}\|,\ \frac{\|J_k^T F(y_k)\|}{\|J_k^T J_k\|}\right\} \right] \ge \sum_{k \in S_2} p_0 \frac{\tau}{2} \min\left\{\|d_k^{LM}\|,\ \frac{\tau}{2\kappa_{bj}^2}\right\}$,

which implies

(2.29) $\lim_{k \to \infty,\, k \in S_2} \|d_k^{LM}\| = 0$.

Then, from the definition of $d_k^{LM}$, we have

(2.30) $\lambda_k \to +\infty, \quad k \in S_2$.

Similarly to (2.25), there exists a positive c 2 such that

(2.31) $\|d_k^{MLM}\| \le c_2 \|d_k^{LM}\|$

holds for all sufficiently large $k \in S_2$. Together with (2.31), we obtain

(2.32) $\|s_k\| \le \|d_k^{LM}\| + \|d_k^{MLM}\| \le (1 + c_2)\|d_k^{LM}\|$.

So, we derive that

(2.33) $\sum_{k \in S_2} \|s_k\| \le \sum_{k \in S_2} \left( \|d_k^{LM}\| + \|d_k^{MLM}\| \right) < +\infty$.

Furthermore, it follows from (2.18) and (2.19) that

$\sum_{k \in S_2} \|J_k^T F_k - J_{k+1}^T F_{k+1}\| < +\infty$.

Since (2.22) holds for infinitely many $k$, there exists a large $\hat{k}$ such that $\|J_{\hat{k}}^T F_{\hat{k}}\| \ge \tau$ and

$\sum_{k \in S_2,\, k \ge \hat{k}} \|J_k^T F_k - J_{k+1}^T F_{k+1}\| < \frac{\tau}{2}$.

From (2.28) to (2.31), we can deduce that lim k x k exists and

(2.34) $\|d_k^{LM}\| \to 0, \qquad \|d_k^{MLM}\| \to 0$.

Therefore, we can obtain

(2.35) $\mu_k \to +\infty$.

In the same way as proved in case I, we can also have

$r_k \to 1$.

Hence, there exists a positive constant m ¯ > m such that μ k < m ¯ holds for all sufficiently large k , which is a contradiction to (2.35). The proof is completed.□

3 Convergence rate of Algorithm 2.1

In this section, we analyze the convergence rate of Algorithm 2.1 under the Hölderian local error bound condition and the Hölderian continuity of the Jacobian. We assume that the sequence { x k } generated by the MLM method converges to the solution set X of (1.1) and lies in some neighborhood of x X .

First, we will make the following assumption for studying the local convergence theory.

Assumption 3.1

  1. $F(x)$ provides a Hölderian local error bound of order $\gamma \in (0, 1]$ in some neighborhood of $x^* \in X^*$, i.e., there exist constants $c > 0$ and $0 < b < 1$ such that

    (3.1) $c\,\mathrm{dist}(x, X^*) \le \|F(x)\|^\gamma, \quad \forall x \in N(x^*, b)$,

    where $N(x^*, b) = \{x \in \mathbb{R}^n \mid \|x - x^*\| \le b\}$.

  2. $J(x)$ is Hölderian continuous of order $v \in (0, 1]$, i.e., there exists a positive constant $\kappa_{hj}$ such that

    (3.2) $\|J(x) - J(y)\| \le \kappa_{hj} \|x - y\|^v, \quad \forall x, y \in N(x^*, b)$.

Similar to (2.20), we have

(3.3) $\|F(y) - F(x) - J(x)(y - x)\| \le \frac{\kappa_{hj}}{1+v} \|y - x\|^{1+v}, \quad \forall x, y \in N(x^*, b)$.

Moreover, there exists a constant κ b f > 0 such that

(3.4) $\|F(y) - F(x)\| \le \kappa_{bf} \|y - x\|, \quad \forall x, y \in N(x^*, b)$.

In the following, we denote by x ¯ k the vector in X that satisfies

$\|\bar{x}_k - x_k\| = \mathrm{dist}(x_k, X^*)$.

3.1 Properties of $d_k^{LM}$ and $d_k^{MLM}$

In this subsection, we investigate the relationships among $\|d_k^{LM}\|$, $\|d_k^{MLM}\|$, and $\mathrm{dist}(x_k, X^*)$.

Suppose the singular value decomposition (SVD) of J ( x ¯ k ) is

$\bar{J}_k = \bar{U}_k \bar{\Sigma}_k \bar{V}_k^T = (\bar{U}_{k,1}, \bar{U}_{k,2}) \begin{pmatrix} \bar{\Sigma}_{k,1} & \\ & 0 \end{pmatrix} \begin{pmatrix} \bar{V}_{k,1}^T \\ \bar{V}_{k,2}^T \end{pmatrix} = \bar{U}_{k,1} \bar{\Sigma}_{k,1} \bar{V}_{k,1}^T$,

where $\bar{\Sigma}_{k,1} = \mathrm{diag}(\bar{\sigma}_{k,1}, \ldots, \bar{\sigma}_{k,r})$ with $\bar{\sigma}_{k,1} \ge \bar{\sigma}_{k,2} \ge \cdots \ge \bar{\sigma}_{k,r} > 0$. The corresponding SVD of $J_k$ is

$J_k = U_k \Sigma_k V_k^T = (U_{k,1}, U_{k,2}, U_{k,3}) \begin{pmatrix} \Sigma_{k,1} & & \\ & \Sigma_{k,2} & \\ & & 0 \end{pmatrix} \begin{pmatrix} V_{k,1}^T \\ V_{k,2}^T \\ V_{k,3}^T \end{pmatrix} = U_{k,1} \Sigma_{k,1} V_{k,1}^T + U_{k,2} \Sigma_{k,2} V_{k,2}^T$,

where $\Sigma_{k,1} = \mathrm{diag}(\sigma_{k,1}, \ldots, \sigma_{k,r})$ with $\sigma_{k,1} \ge \sigma_{k,2} \ge \cdots \ge \sigma_{k,r} > 0$, and $\Sigma_{k,2} = \mathrm{diag}(\sigma_{k,r+1}, \ldots, \sigma_{k,r+q})$ with $\sigma_{k,r} \ge \sigma_{k,r+1} \ge \cdots \ge \sigma_{k,r+q} > 0$. In the following, if the context is clear, we will omit the subscript $k$ in $\Sigma_{k,i}$, $U_{k,i}$, and $V_{k,i}$ ($i = 1, 2, 3$) and write $J_k$ as

$J_k = U_1 \Sigma_1 V_1^T + U_2 \Sigma_2 V_2^T$.

Lemma 3.2

Under the conditions of Assumption 3.1, if $x_k, y_k \in N(x^*, b/2)$, then there exists a constant $c_3 > 0$ such that

(3.5) $\|s_k\| \le c_3\,\mathrm{dist}(x_k, X^*)^{\min\left\{1,\ 1+v-\frac{\alpha}{2\gamma},\ (1+v)\left(1+v-\frac{\alpha}{2\gamma}\right)+v-\frac{\alpha}{\gamma}\right\}}$

holds for all sufficiently large k.

Proof

Since $x_k \in N(x^*, b/2)$, we obtain

$\|\bar{x}_k - x^*\| \le \|\bar{x}_k - x_k\| + \|x_k - x^*\| \le 2\|x_k - x^*\| \le b$,

which implies that $\bar{x}_k \in N(x^*, b)$. From (3.1) and (2.17), we have

(3.6) $\lambda_k = \mu_k \|F_k\|^\alpha \ge m c^{\alpha/\gamma} \|\bar{x}_k - x_k\|^{\alpha/\gamma}$.

From (3.3), we can obtain

$\|F_k + J_k(\bar{x}_k - x_k)\|^2 = \|F(\bar{x}_k) - F_k - J_k(\bar{x}_k - x_k)\|^2 \le \left(\frac{\kappa_{hj}}{1+v}\right)^2 \|\bar{x}_k - x_k\|^{2+2v}$.

Since d k LM is the minimizer of φ k , 1 ( d ) , we have

(3.7) $\|d_k^{LM}\|^2 \le \frac{\varphi_{k,1}(d_k^{LM})}{\lambda_k} \le \frac{\varphi_{k,1}(\bar{x}_k - x_k)}{\lambda_k} = \frac{\|F_k + J_k(\bar{x}_k - x_k)\|^2 + \lambda_k \|\bar{x}_k - x_k\|^2}{\lambda_k} \le \frac{\kappa_{hj}^2}{m c^{\alpha/\gamma} (1+v)^2} \|\bar{x}_k - x_k\|^{2+2v-\alpha/\gamma} + \|\bar{x}_k - x_k\|^2 \le c_4^2 \|\bar{x}_k - x_k\|^{2\min\{1,\ 1+v-\alpha/(2\gamma)\}}$,

where $c_4 = \left(\dfrac{\kappa_{hj}^2}{m c^{\alpha/\gamma} (1+v)^2} + 1\right)^{1/2}$. Then

(3.8) $\|d_k^{LM}\| \le c_4 \|\bar{x}_k - x_k\|^{\min\{1,\ 1+v-\alpha/(2\gamma)\}}$.

It follows from (3.3) that

(3.9) $\|d_k^{MLM}\| = \|(J_k^T J_k + \lambda_k I)^{-1} J_k^T F(y_k)\| \le \|(J_k^T J_k + \lambda_k I)^{-1} J_k^T F_k\| + \|(J_k^T J_k + \lambda_k I)^{-1} J_k^T J_k d_k^{LM}\| + \frac{\kappa_{hj}}{1+v} \|d_k^{LM}\|^{1+v} \|(J_k^T J_k + \lambda_k I)^{-1} J_k^T\| \le 2\|d_k^{LM}\| + \frac{\kappa_{hj}}{1+v} \|d_k^{LM}\|^{1+v} \|(J_k^T J_k + \lambda_k I)^{-1} J_k^T\|$.

Now, using the SVD of J k , we can obtain

(3.10) $\|(J_k^T J_k + \lambda_k I)^{-1} J_k^T\| = \left\| (V_1, V_2, V_3) \begin{pmatrix} (\Sigma_1^2 + \lambda_k I)^{-1} \Sigma_1 & & \\ & (\Sigma_2^2 + \lambda_k I)^{-1} \Sigma_2 & \\ & & 0 \end{pmatrix} \begin{pmatrix} U_1^T \\ U_2^T \\ U_3^T \end{pmatrix} \right\| \le \|\Sigma_1^{-1}\| + \|\lambda_k^{-1} \Sigma_2\|$.

By the theory of matrix perturbation [26], we have

$\|\mathrm{diag}(\Sigma_1 - \bar{\Sigma}_1, \Sigma_2, 0)\| \le \|J_k - J(\bar{x}_k)\| \le \kappa_{hj} \|\bar{x}_k - x_k\|^v$.

The above inequalities imply

(3.11) $\|\Sigma_1 - \bar{\Sigma}_1\| \le \kappa_{hj} \|\bar{x}_k - x_k\|^v, \qquad \|\Sigma_2\| \le \kappa_{hj} \|\bar{x}_k - x_k\|^v$.

Since $\{x_k\}$ converges to $x^*$, without loss of generality, we assume that $\kappa_{hj} \|\bar{x}_k - x_k\|^v \le \bar{\sigma}_r/2$ holds for all large $k$. From (3.11), we have

(3.12) $\|\Sigma_1^{-1}\| \le \frac{1}{\bar{\sigma}_r - \kappa_{hj} \|\bar{x}_k - x_k\|^v} \le \frac{2}{\bar{\sigma}_r}$.

From (3.6), we can derive

(3.13) $\|\lambda_k^{-1} \Sigma_2\| = \frac{\|\Sigma_2\|}{\mu_k \|F(x_k)\|^\alpha} \le \frac{\kappa_{hj}}{m c^{\alpha/\gamma}} \|\bar{x}_k - x_k\|^{v - \alpha/\gamma}$.

From (3.9) and (3.10), there exist positive constants $c_5$ and $\bar{c}$ such that

(3.14) $\|d_k^{MLM}\| \le 2\|d_k^{LM}\| + c_5 \|d_k^{LM}\|^{1+v} \|\bar{x}_k - x_k\|^{v - \alpha/\gamma} \le \bar{c} \|\bar{x}_k - x_k\|^{\min\left\{1,\ 1+v-\frac{\alpha}{2\gamma},\ (1+v)\left(1+v-\frac{\alpha}{2\gamma}\right)+v-\frac{\alpha}{\gamma}\right\}}$

holds for all sufficiently large k . Therefore, we can obtain

(3.15) $\|s_k\| = \|d_k^{LM} + d_k^{MLM}\| \le \|d_k^{LM}\| + \|d_k^{MLM}\| \le c_3 \|\bar{x}_k - x_k\|^{\min\left\{1,\ 1+v-\frac{\alpha}{2\gamma},\ (1+v)\left(1+v-\frac{\alpha}{2\gamma}\right)+v-\frac{\alpha}{\gamma}\right\}}$,

where c 3 is a positive constant. The proof is completed.□

The updating rule of μ k indicates that μ k is bounded below. Next, we show that μ k is also bounded above.

Lemma 3.3

Under the conditions of Assumption 3.1, if $x_k, y_k \in N(x^*, b/2)$ and

$v > \max\left\{\frac{1}{\gamma} - 1,\ \frac{1}{\gamma(1+v) - \alpha/2} - 1,\ \frac{1-\gamma}{\gamma(1+v) - \alpha/2}\right\}$,

then there exists a constant M > m such that

(3.16) $\mu_k \le M$

holds for all sufficiently large k.

Proof

First, we prove that for all sufficiently large k

(3.17) $\mathrm{Pred}_k \ge \breve{c} \|F_k\| \|d_k^{LM}\|^{\max\left\{\frac{1}{\gamma},\ \frac{1}{\gamma(1+v)-\alpha/2},\ \frac{1-\gamma}{\gamma(1+v)-\alpha/2}+1\right\}}$,

where c ˘ is a positive constant.

We consider two cases:

Case 1: $\|\bar{x}_k - x_k\| \le \|d_k^{LM}\|$. It follows from (3.1), (3.3), (3.8), and $v > 1/\gamma - 1$ that

(3.18) $\|F_k\| - \|F_k + J_k d_k^{LM}\| \ge \|F_k\| - \|F_k + J_k(\bar{x}_k - x_k)\| \ge c^{1/\gamma} \|\bar{x}_k - x_k\|^{1/\gamma} - \frac{\kappa_{hj}}{1+v} \|\bar{x}_k - x_k\|^{1+v} \ge c_6 \|d_k^{LM}\|^{\max\left\{\frac{1}{\gamma},\ \frac{1}{\gamma(1+v)-\alpha/2}\right\}}$

holds for some c 6 > 0 .

Case 2: $\|\bar{x}_k - x_k\| > \|d_k^{LM}\|$. From (3.18), we can obtain

(3.19) $\|F_k\| - \|F_k + J_k d_k^{LM}\| \ge \|F_k\| - \left\|F_k + \frac{\|d_k^{LM}\|}{\|\bar{x}_k - x_k\|} J_k(\bar{x}_k - x_k)\right\| = \|F_k\| - \left\|\left(1 - \frac{\|d_k^{LM}\|}{\|\bar{x}_k - x_k\|}\right) F_k + \frac{\|d_k^{LM}\|}{\|\bar{x}_k - x_k\|} \left(F_k + J_k(\bar{x}_k - x_k)\right)\right\| \ge \frac{\|d_k^{LM}\|}{\|\bar{x}_k - x_k\|} \left(\|F_k\| - \|F_k + J_k(\bar{x}_k - x_k)\|\right) \ge c_7 \|d_k^{LM}\| \|\bar{x}_k - x_k\|^{1/\gamma - 1} \ge \breve{c} \|d_k^{LM}\|^{\max\left\{\frac{1}{\gamma},\ \frac{1-\gamma}{\gamma(1+v)-\alpha/2}+1\right\}}$

holds for some c 7 , c ˘ > 0 .

From (3.18) and (3.19), we have

(3.20) $\|F_k\|^2 - \|F_k + J_k d_k^{LM}\|^2 = \left(\|F_k\| + \|F_k + J_k d_k^{LM}\|\right) \left(\|F_k\| - \|F_k + J_k d_k^{LM}\|\right) \ge \|F_k\| \left(\|F_k\| - \|F_k + J_k d_k^{LM}\|\right) \ge \breve{c} \|F_k\| \|d_k^{LM}\|^{\max\left\{\frac{1}{\gamma},\ \frac{1}{\gamma(1+v)-\alpha/2},\ \frac{1-\gamma}{\gamma(1+v)-\alpha/2}+1\right\}}$.

Since $d_k^{MLM}$ is a solution of (2.8), we know that $\|F(y_k)\|^2 - \|F(y_k) + J_k d_k^{MLM}\|^2 \ge 0$. Hence, we obtain

$\mathrm{Pred}_k = \|F_k\|^2 - \|F_k + J_k d_k^{LM}\|^2 + \|F(y_k)\|^2 - \|F(y_k) + J_k d_k^{MLM}\|^2 \ge \|F_k\|^2 - \|F_k + J_k d_k^{LM}\|^2 \ge \breve{c} \|F_k\| \|d_k^{LM}\|^{\max\left\{\frac{1}{\gamma},\ \frac{1}{\gamma(1+v)-\alpha/2},\ \frac{1-\gamma}{\gamma(1+v)-\alpha/2}+1\right\}}$.

It follows from (3.3), (3.8), and (3.17) that

$|r_k - 1| = \left| \frac{\mathrm{Ared}_k - \mathrm{Pred}_k}{\mathrm{Pred}_k} \right| = \frac{\left| \|F(x_k + d_k^{LM} + d_k^{MLM})\|^2 - \|F_k + J_k d_k^{LM}\|^2 + \|F(y_k)\|^2 - \|F(y_k) + J_k d_k^{MLM}\|^2 \right|}{\mathrm{Pred}_k} \le \frac{\|F_k + J_k s_k\| O(\|d_k^{LM}\|^{1+v}) + O(\|d_k^{LM}\|^{2+2v}) + \|F_k + J_k d_k^{LM}\| O(\|d_k^{LM}\|^{1+v})}{\breve{c} \|F_k\| \|d_k^{LM}\|^{\max\left\{\frac{1}{\gamma},\ \frac{1}{\gamma(1+v)-\alpha/2},\ \frac{1-\gamma}{\gamma(1+v)-\alpha/2}+1\right\}}}$.

In view of (3.4), (3.8), (3.9), and (3.14), we have

(3.21) $\|F_k + J_k d_k^{LM}\| \le \|F_k\|$

and

(3.22) $\|F_k + J_k s_k\| \le \|F_k + J_k d_k^{LM}\| + \|J_k d_k^{MLM}\| \le \|F_k\| + \kappa_{bf} \|d_k^{MLM}\| \le \|F_k\| + O\!\left(\|\bar{x}_k - x_k\|^{\min\left\{1,\ 1+v-\frac{\alpha}{2\gamma},\ (1+v)\left(1+v-\frac{\alpha}{2\gamma}\right)+v-\frac{\alpha}{\gamma}\right\}}\right)$.

Since

$v > \max\left\{\frac{1}{\gamma} - 1,\ \frac{1}{\gamma(1+v) - \alpha/2} - 1,\ \frac{1-\gamma}{\gamma(1+v) - \alpha/2}\right\}$,

then, we can obtain

$r_k \to 1$.

Therefore, there exists a positive constant M > m such that μ k M holds for all sufficiently large k . The proof is completed.□

Lemma 3.3 together with (3.4) indicates that the MLM parameter satisfies

(3.23) $\lambda_k = \mu_k \|F_k\|^\alpha \le M \kappa_{bf}^\alpha \|\bar{x}_k - x_k\|^\alpha$.

Hence, the MLM parameter is also bounded above.

3.2 Convergence rate of Algorithm 2.1

By the SVD of J k , we can obtain

(3.24) $d_k^{LM} = -V_1 (\Sigma_1^2 + \lambda_k I)^{-1} \Sigma_1 U_1^T F_k - V_2 (\Sigma_2^2 + \lambda_k I)^{-1} \Sigma_2 U_2^T F_k$,

(3.25) $d_k^{MLM} = -V_1 (\Sigma_1^2 + \lambda_k I)^{-1} \Sigma_1 U_1^T F(y_k) - V_2 (\Sigma_2^2 + \lambda_k I)^{-1} \Sigma_2 U_2^T F(y_k)$,

(3.26) $F_k + J_k d_k^{LM} = F_k - U_1 \Sigma_1 (\Sigma_1^2 + \lambda_k I)^{-1} \Sigma_1 U_1^T F_k - U_2 \Sigma_2 (\Sigma_2^2 + \lambda_k I)^{-1} \Sigma_2 U_2^T F_k = \lambda_k U_1 (\Sigma_1^2 + \lambda_k I)^{-1} U_1^T F_k + \lambda_k U_2 (\Sigma_2^2 + \lambda_k I)^{-1} U_2^T F_k + U_3 U_3^T F_k$,

(3.27) $F(y_k) + J_k d_k^{MLM} = F(y_k) - U_1 \Sigma_1 (\Sigma_1^2 + \lambda_k I)^{-1} \Sigma_1 U_1^T F(y_k) - U_2 \Sigma_2 (\Sigma_2^2 + \lambda_k I)^{-1} \Sigma_2 U_2^T F(y_k) = \lambda_k U_1 (\Sigma_1^2 + \lambda_k I)^{-1} U_1^T F(y_k) + \lambda_k U_2 (\Sigma_2^2 + \lambda_k I)^{-1} U_2^T F(y_k) + U_3 U_3^T F(y_k)$.

In the following, we estimate $\|U_1 U_1^T F_k\|$, $\|U_2 U_2^T F_k\|$, and $\|U_3 U_3^T F_k\|$, as well as $\|U_1 U_1^T F(y_k)\|$, $\|U_2 U_2^T F(y_k)\|$, and $\|U_3 U_3^T F(y_k)\|$.

Lemma 3.4

Under the conditions of Assumption 3.1, if $x_k \in N(x^*, b/2)$, then we obtain

  (a) $\|U_1 U_1^T F_k\| \le \kappa_{bf} \|\bar{x}_k - x_k\|$,

  (b) $\|U_2 U_2^T F_k\| \le 2\kappa_{hj} \|\bar{x}_k - x_k\|^{1+v}$,

  (c) $\|U_3 U_3^T F_k\| \le \kappa_{hj} \|\bar{x}_k - x_k\|^{1+v}$,

where $\kappa_{hj}$ and $\kappa_{bf}$ are given in (3.2) and (3.4), respectively.

Proof

We derive the result (a) from (3.4) immediately.

Let $\bar{s}_k = -J_k^+ F_k$, where $J_k^+$ is the pseudo-inverse of $J_k$. It is easy to see that $\bar{s}_k$ is the least squares solution of $\min_{s \in \mathbb{R}^n} \|F_k + J_k s\|$. Hence, we have from (3.3) that

(3.28) $\|U_3 U_3^T F_k\| = \|F_k + J_k \bar{s}_k\| \le \|F_k + J_k(\bar{x}_k - x_k)\| \le \kappa_{hj} \|\bar{x}_k - x_k\|^{1+v}$.

Let $\tilde{J}_k = U_1 \Sigma_1 V_1^T$ and $\tilde{s}_k = -\tilde{J}_k^+ F_k$. Since $\tilde{s}_k$ is the least squares solution of $\min_{s \in \mathbb{R}^n} \|F_k + \tilde{J}_k s\|$, it follows from (3.3) and (3.11) that

(3.29) $\|(U_2 U_2^T + U_3 U_3^T) F_k\| = \|F_k + \tilde{J}_k \tilde{s}_k\| \le \|F_k + \tilde{J}_k(\bar{x}_k - x_k)\| \le \|F_k + J_k(\bar{x}_k - x_k)\| + \|(\tilde{J}_k - J_k)(\bar{x}_k - x_k)\|$

$\le \frac{\kappa_{hj}}{1+v} \|\bar{x}_k - x_k\|^{1+v} + \|U_2 \Sigma_2 V_2^T (\bar{x}_k - x_k)\| \le \frac{\kappa_{hj}}{1+v} \|\bar{x}_k - x_k\|^{1+v} + \kappa_{hj} \|\bar{x}_k - x_k\|^v \|\bar{x}_k - x_k\| \le 2\kappa_{hj} \|\bar{x}_k - x_k\|^{1+v}$.

Due to the orthogonality of U 2 and U 3 , we can obtain result (b).□

Lemma 3.5

Under the conditions of Assumption 3.1, if $x_k, y_k \in N(x^*, b/2)$, $v > 1/\gamma - 1$, and $1/\gamma - 1 < \alpha \le 2\gamma v$, then we obtain

  (a) $\|U_1 U_1^T F(y_k)\| \le c_8 \|\bar{x}_k - x_k\|^{\min\{1+\alpha,\ 1+v\}}$,

  (b) $\|U_2 U_2^T F(y_k)\| \le c_9 \|\bar{x}_k - x_k\|^{\min\{v+\gamma+\gamma\alpha,\ v+\gamma+\gamma v\}}$,

  (c) $\|U_3 U_3^T F(y_k)\| \le c_{10} \|\bar{x}_k - x_k\|^{\min\{v+\gamma+\gamma\alpha,\ v+\gamma+\gamma v\}}$,

where c 8 , c 9 , c 10 are positive constants.

Proof

It follows from (3.12), (3.26), and Lemma 3.4 that

(3.30) $\|F_k + J_k d_k^{LM}\| \le \left(\frac{4M\kappa_{bf}^{1+\alpha}}{\bar{\sigma}_r^2} + 3\kappa_{hj}\right) \|\bar{x}_k - x_k\|^{\min\{1+\alpha,\ 1+v\}}$,

which together with (3.3) and (3.8) implies that

(3.31) $\|F(y_k)\| = \|F(x_k + d_k^{LM})\| \le \|F_k + J_k d_k^{LM}\| + \frac{\kappa_{hj}}{1+v} \|d_k^{LM}\|^{1+v} \le \left(\frac{4M\kappa_{bf}^{1+\alpha}}{\bar{\sigma}_r^2} + 3\kappa_{hj} + \frac{\kappa_{hj} c_{11}}{1+v}\right) \|\bar{x}_k - x_k\|^{\min\{1+\alpha,\ 1+v\}} \le c_8 \|\bar{x}_k - x_k\|^{\min\{1+\alpha,\ 1+v\}}$,

for some c 8 , c 11 > 0 , which gives result (a).

From (3.1), we can obtain

(3.32) $\|\bar{y}_k - y_k\| \le c^{-1} \|F(y_k)\|^\gamma \le c_{12} \|\bar{x}_k - x_k\|^{\min\{\gamma(1+\alpha),\ \gamma(1+v)\}}$,

where $\bar{y}_k \in X^*$ satisfies $\|\bar{y}_k - y_k\| = \mathrm{dist}(y_k, X^*)$ and $c_{12}$ is a positive constant.

Let $\bar{p}_k = -J_k^+ F(y_k)$; then $\bar{p}_k$ is the least squares solution of $\min_{p \in \mathbb{R}^n} \|F(y_k) + J_k p\|$. It follows from (3.1), (3.2), and (3.32) that

(3.33) $\|U_3 U_3^T F(y_k)\| = \|F(y_k) + J_k \bar{p}_k\| \le \|F(y_k) + J_k(\bar{y}_k - y_k)\| \le \|F(y_k) + J(y_k)(\bar{y}_k - y_k)\| + \|(J_k - J(y_k))(\bar{y}_k - y_k)\| \le \frac{\kappa_{hj}}{1+v} \|\bar{y}_k - y_k\|^{1+v} + \kappa_{hj} \|d_k^{LM}\|^v \|\bar{y}_k - y_k\| \le \frac{\kappa_{hj} c_{12}^{1+v}}{1+v} \|\bar{x}_k - x_k\|^{\min\{\gamma(1+\alpha)(1+v),\ \gamma(1+v)^2\}} + c_{13} \|\bar{x}_k - x_k\|^{\min\{v+\gamma(1+\alpha),\ v+\gamma(1+v)\}} \le c_{10} \|\bar{x}_k - x_k\|^{\min\{v+\gamma+\gamma\alpha,\ v+\gamma+\gamma v\}}$,

for some positive c 10 and c 13 .

Let $\tilde{J}_k = U_1 \Sigma_1 V_1^T$ and $\tilde{p}_k = -\tilde{J}_k^+ F(y_k)$. Since $\tilde{p}_k$ is the least squares solution of $\min_{p \in \mathbb{R}^n} \|F(y_k) + \tilde{J}_k p\|$, we deduce from (3.2), (3.3), and (3.32) that

(3.34) $\|(U_2 U_2^T + U_3 U_3^T) F(y_k)\| = \|F(y_k) + \tilde{J}_k \tilde{p}_k\| \le \|F(y_k) + \tilde{J}_k(\bar{y}_k - y_k)\|$

$\le \|F(y_k) + J(y_k)(\bar{y}_k - y_k)\| + \|(J_k - J(y_k))(\bar{y}_k - y_k)\| + \|U_2 \Sigma_2 V_2^T(\bar{y}_k - y_k)\| \le \frac{\kappa_{hj}}{1+v} \|\bar{y}_k - y_k\|^{1+v} + \kappa_{hj} \|d_k^{LM}\|^v \|\bar{y}_k - y_k\| + \kappa_{hj} \|\bar{x}_k - x_k\|^v \|\bar{y}_k - y_k\| \le \frac{\kappa_{hj} c_{12}^{1+v}}{1+v} \|\bar{x}_k - x_k\|^{\min\{\gamma(1+\alpha)(1+v),\ \gamma(1+v)^2\}} + c_{14} \|\bar{x}_k - x_k\|^{\min\{v+\gamma+\gamma\alpha,\ v+\gamma+\gamma v\}} \le c_9 \|\bar{x}_k - x_k\|^{\min\{v+\gamma+\gamma\alpha,\ v+\gamma+\gamma v\}}$,

for some positive c 9 and c 14 . Due to the orthogonality of U 2 and U 3 , we can obtain result (b).□

Theorem 3.6

Under the conditions of Assumption 3.1, if $v > 1/\gamma - 1$ and $1/\gamma - 1 < \alpha \le 2\gamma v$, then the sequence $\{x_k\}$ generated by Algorithm 2.1 converges to some solution of (1.1) with order

(3.35) $\vartheta = \min\{\gamma(1+2\alpha),\ \gamma(1+2v),\ \gamma(1+\alpha+v),\ \gamma(v+\gamma+\gamma\alpha),\ \gamma(v+\gamma+\gamma v)\}$.

Proof

From (3.6), (3.11), (3.12), (3.25), (3.27), and Lemma 3.5, we have

(3.36) $\|d_k^{MLM}\| = \|{-V_1}(\Sigma_1^2 + \lambda_k I)^{-1} \Sigma_1 U_1^T F(y_k) - V_2 (\Sigma_2^2 + \lambda_k I)^{-1} \Sigma_2 U_2^T F(y_k)\| \le \|\Sigma_1^{-1}\| \|U_1^T F(y_k)\| + \|\lambda_k^{-1} \Sigma_2\| \|U_2^T F(y_k)\| \le \frac{2c_8}{\bar{\sigma}_r} \|\bar{x}_k - x_k\|^{\min\{1+\alpha,\ 1+v\}} + c_{15} \|\bar{x}_k - x_k\|^{\min\left\{\gamma+\gamma\alpha+2v-\frac{\alpha}{\gamma},\ \gamma+\gamma v+2v-\frac{\alpha}{\gamma}\right\}} \le c_{16} \|\bar{x}_k - x_k\|^{\min\left\{1+\alpha,\ 1+v,\ \gamma+\gamma\alpha+2v-\frac{\alpha}{\gamma},\ \gamma+\gamma v+2v-\frac{\alpha}{\gamma}\right\}}$

and

(3.37) $\|F(y_k) + J_k d_k^{MLM}\| = \|\lambda_k U_1 (\Sigma_1^2 + \lambda_k I)^{-1} U_1^T F(y_k) + \lambda_k U_2 (\Sigma_2^2 + \lambda_k I)^{-1} U_2^T F(y_k) + U_3 U_3^T F(y_k)\| \le \lambda_k \|\Sigma_1^{-2}\| \|U_1^T F(y_k)\| + \|U_2^T F(y_k)\| + \|U_3^T F(y_k)\| \le \frac{4M c_8 \kappa_{bf}^\alpha}{\bar{\sigma}_r^2} \|\bar{x}_k - x_k\|^{\min\{1+2\alpha,\ 1+v+\alpha\}} + (c_9 + c_{10}) \|\bar{x}_k - x_k\|^{\min\{v+\gamma+\gamma\alpha,\ v+\gamma+\gamma v\}} \le c_{17} \|\bar{x}_k - x_k\|^{\min\{1+2\alpha,\ 1+v+\alpha,\ v+\gamma+\gamma\alpha,\ v+\gamma+\gamma v\}}$,

where c 15 , c 16 , c 17 are positive constants.

Combining (3.1), (3.2), (3.8)–(3.10), (3.36), and (3.37), we obtain

(3.38) $(c \|\bar{x}_{k+1} - x_{k+1}\|)^{1/\gamma} \le \|F(x_{k+1})\| = \|F(y_k + d_k^{MLM})\| \le \|F(y_k) + J(y_k) d_k^{MLM}\| + \frac{\kappa_{hj}}{1+v} \|d_k^{MLM}\|^{1+v} \le \|F(y_k) + J_k d_k^{MLM}\| + \|(J(y_k) - J_k) d_k^{MLM}\| + \frac{\kappa_{hj}}{1+v} \|d_k^{MLM}\|^{1+v} \le c_{17} \|\bar{x}_k - x_k\|^{\min\{1+2\alpha,\ 1+v+\alpha,\ v+\gamma+\gamma\alpha,\ v+\gamma+\gamma v\}} + \kappa_{hj} \|d_k^{LM}\|^v \|d_k^{MLM}\| + \frac{\kappa_{hj}}{1+v} \|d_k^{MLM}\|^{1+v} \le c_{18} \|\bar{x}_k - x_k\|^{\min\{1+2\alpha,\ 1+2v,\ 1+\alpha+v,\ v+\gamma+\gamma\alpha,\ v+\gamma+\gamma v\}}$,

where $c_{18}$ is a positive constant.

Therefore,

(3.39) $c \|\bar{x}_{k+1} - x_{k+1}\| \le c_{18}^\gamma \|\bar{x}_k - x_k\|^{\min\{\gamma(1+2\alpha),\ \gamma(1+2v),\ \gamma(1+\alpha+v),\ \gamma(v+\gamma+\gamma\alpha),\ \gamma(v+\gamma+\gamma v)\}}$.

The proof is completed.□

Corollary 3.7

Under the conditions of Assumption 3.1, the following hold.

  1. If $\gamma = 1$ and $v \in \left(0, \frac{1}{2}\right]$, then

    (3.40) $\|\bar{x}_{k+1} - x_{k+1}\| \le \begin{cases} O(\|\bar{x}_k - x_k\|^{1+2\alpha}), & \text{if } \alpha \in (0, v), \\ O(\|\bar{x}_k - x_k\|^{1+2v}), & \text{if } \alpha \in [v, 2v]. \end{cases}$

  2. If $v = 1$ and $\gamma \in \left(\frac{1}{2}, 1\right]$, then

    (3.41) $\|\bar{x}_{k+1} - x_{k+1}\| \le \begin{cases} O(\|\bar{x}_k - x_k\|^{\gamma(1+\gamma+\gamma\alpha)}), & \text{if } \alpha \in \left(\frac{1}{\gamma}-1, 1\right), \\ O(\|\bar{x}_k - x_k\|^{\gamma(1+2\gamma)}), & \text{if } \alpha \in [1, 2\gamma]. \end{cases}$

In particular, if $\gamma = v = 1$, then

(3.42) $\|\bar{x}_{k+1} - x_{k+1}\| \le \begin{cases} O(\|\bar{x}_k - x_k\|^{1+2\alpha}), & \text{if } \alpha \in (0, 1), \\ O(\|\bar{x}_k - x_k\|^{3}), & \text{if } \alpha \in [1, 2]. \end{cases}$

Corollary 3.7 indicates that, if $\gamma = v = 1$, i.e., if $F(x)$ satisfies the local error bound condition and the Jacobian $J(x)$ is Lipschitz continuous, then the sequence $\{x_k\}$ generated by Algorithm 2.1 converges to some solution of (1.1) superlinearly with order $1 + 2\alpha$ for any $\alpha \in (0, 1)$ and cubically for any $\alpha \in [1, 2]$. This coincides with the results in [9].
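The $\gamma = v = 1$, $\alpha = 1$ case can be illustrated numerically with a purely local two-step iteration (every trial step accepted), run on a smooth system of our own choosing whose Jacobian is nonsingular at the root, so that the local error bound holds with $\gamma = 1$.

```python
import numpy as np

# Local MLM iteration with lambda_k = ||F_k|| (alpha = 1) on a smooth system
# with root x* = (0, 0) and nonsingular Jacobian there (gamma = v = 1).
F = lambda x: np.array([np.exp(x[0]) - 1.0, x[0] + x[1]])
J = lambda x: np.array([[np.exp(x[0]), 0.0], [1.0, 1.0]])

x = np.array([0.5, 0.3])
errs = []
for _ in range(6):
    Fx, Jx = F(x), J(x)
    lam = np.linalg.norm(Fx)
    A = Jx.T @ Jx + lam * np.eye(2)
    d_lm = np.linalg.solve(A, -Jx.T @ Fx)     # LM step
    y = x + d_lm
    x = y + np.linalg.solve(A, -Jx.T @ F(y))  # approximate MLM step, same matrix
    errs.append(np.linalg.norm(x))            # distance to the root
print(errs)
```

The printed errors collapse very rapidly (consistent with order about 3) until rounding error dominates.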

4 Conclusion

In this article, we study the convergence rate of the MLM method under the Hölderian local error bound condition and the Hölderian continuity of the Jacobian, which are more general than the local error bound condition and the Lipschitz continuity of the Jacobian. If $\gamma = v = 1$, i.e., if $F(x)$ satisfies the local error bound condition and the Jacobian $J(x)$ is Lipschitz continuous, then the sequence $\{x_k\}$ generated by Algorithm 2.1 converges to some solution of (1.1) superlinearly with order $1 + 2\alpha$ for any $\alpha \in (0, 1)$ and cubically for any $\alpha \in [1, 2]$. This coincides with the results in [9].

Acknowledgements

The authors thank the referees for valuable comments and suggestions, which improved the presentation of this manuscript.

Funding information: This work was supported by the Natural Science Foundation of Education Bureau of Anhui Province (KJ2020A0017, KJ2017A432).

Conflict of interest: The authors state no conflict of interest.

References

[1] C. T. Kelley, Solving nonlinear equations with Newton’s method, Fundamentals of Algorithms, SIAM, Philadelphia, 2003. 10.1137/1.9780898718898Suche in Google Scholar

[2] K. Levenberg, A method for the solution of certain nonlinear problems in least squares, Quart. Appl. Math. 2 (1944), no. 2, 164–166. 10.1090/qam/10666Suche in Google Scholar

[3] D. W. Marquardt, An algorithm for least-squares estimation of nonlinear inequalities, SIAM J. Appl. Math. 11 (1963), no. 2, 431–441, https://doi.org/10.1137/0111030. Suche in Google Scholar

[4] N. Yamashita and M. Fukushima, On the rate of convergence of the Levenberg-Marquardt method, in: G. Alefeld and X. Chen (eds), Topics in Numerical Analysis, Computing Supplementa, vol. 15, Springer, Vienna, 2001, DOI: https://doi.org/10.1007/978-3-7091-6217-0_18. 10.1007/978-3-7091-6217-0_18Suche in Google Scholar

[5] J. Y. Fan and Y. X. Yuan, On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption, Computing 74 (2005), 23–39, https://doi.org/10.1007/s00607-004-0083-1. Suche in Google Scholar

[6] J. Y. Fan, A modified Levenberg-Marquardt algorithm for singular system of nonlinear equations, J. Comput. Math. 21 (2003), no. 5, 625–636. Suche in Google Scholar

[7] J. J. Moré, The Levenberg-Marquardt algorithm: implementation and theory, In: G. A. Watson, (eds), Numerical Analysis. Lecture Notes in Mathematics, vol. 630, Springer, Berlin, Heidelberg, 1978, https://doi.org/10.1007/BFb0067700. Suche in Google Scholar

[8] J. Y. Fan, J. C. Huang, and J. Y. Pan, An adaptive multi-step Levenberg-Marquardt method, J. Sci. Comput. 78 (2019), no. 1, 531–548, https://doi.org/10.1007/s10915-018-0777-8. Suche in Google Scholar

[9] J. Y. Fan, The modified Levenberg-Marquardt method for nonlinear equations with cubic convergence, Math. Comp. 81 (2012), no. 277, 447–466, https://doi.org/10.1090/S0025-5718-2011-02496-8. Suche in Google Scholar

[10] X. Zhao and J. Y. Fan, On the multi-point Levenberg-Marquardt method for singular nonlinear equations, Comput. Appl. Math. 36 (2017), no. 1, 203–223, https://doi.org/10.1007/s40314-015-0221-8. Suche in Google Scholar

[11] L. Chen, A high-order modified Levenberg-Marquardt method for systems of nonlinear equations with fourth-order convergence, Appl. Math. Comput. 285 (2016), 79–93, https://doi.org/10.1016/j.amc.2016.03.031. Suche in Google Scholar

[12] M. Ahookhosh, F. J. Aragn, R. M. T. Fleming, and P. T. Vuong, Local convergence of Levenberg-Marquardt methods under Hölder metric subregularity, Adv. Comput. Math. 45 (2019), no. 5, 2771–2806, https://doi.org/10.1007/s10444-019-09708-7. Suche in Google Scholar

[13] H. Y. Wang and J. Y. Fan, Convergence rate of the Levenberg-Marquardt method under Hölderian local error bound, Optim. Methods Softw. 35 (2020), no. 4, 767–786, https://doi.org/10.1080/10556788.2019.1694927. Suche in Google Scholar

[14] X. D. Zhu and G. H. Lin, Improved convergence results for a modified Levenberg-Marquardt method for nonlinear equations and applications in MPCC, Optim. Methods Softw. 31 (2016), no. 4, 791–804, DOI: https://doi.org/10.1080/10556788.2016.1171863. 10.1080/10556788.2016.1171863Suche in Google Scholar

[15] M. J. D. Powell, Convergence properties of a class of minimization algorithms, Nonlinear Program. 2 (1975), 1–27, https://doi.org/10.1016/B978-0-12-468650-2.50005-5. Suche in Google Scholar

[16] W. Y. Sun and Y. X. Yuan, Optimization Theory and Methods: Nonlinear Programming, Springer, New York, 2006. Suche in Google Scholar

[17] Y. X. Yuan, Recent advances in trust region algorithms, Math. Program. 151 (2015), 249–281, https://doi.org/10.1007/s10107-015-0893-2.

[18] Y. X. Yuan, Recent advances in numerical methods for nonlinear equations and nonlinear least squares, Numer. Algebra Control Optim. 1 (2011), no. 1, 15–34, https://doi.org/10.3934/naco.2011.1.15.

[19] J. Y. Fan, Accelerating the modified Levenberg-Marquardt method for nonlinear equations, Math. Comp. 83 (2014), no. 287, 1173–1187, https://doi.org/10.1090/S0025-5718-2013-02752-4.

[20] X. H. Miao, K. Yao, C. Y. Yang, and J. S. Chen, Levenberg-Marquardt method for absolute value equation associated with second-order cone, Numer. Algebra Control Optim. 12 (2022), no. 1, 47–61, https://doi.org/10.3934/naco.2021050.

[21] H. Y. Wang and J. Y. Fan, Convergence properties of inexact Levenberg-Marquardt method under Hölderian local error bound, J. Ind. Manag. Optim. 17 (2021), no. 4, 2265–2275, https://doi.org/10.3934/jimo.2020068.

[22] Z. F. Dai, T. Li, and M. Yang, Forecasting stock return volatility: The role of shrinkage approaches in a data-rich environment, J. Forecast. 41 (2022), no. 5, 980–996, https://doi.org/10.1002/for.2841.

[23] Z. F. Dai and H. Y. Zhu, Time-varying spillover effects and investment strategies between WTI crude oil, natural gas and Chinese stock markets related to belt and road initiative, Energy Econ. 108 (2022), 105883, https://doi.org/10.1016/j.eneco.2022.105883.

[24] J. Moré, B. Garbow, and K. Hillstrom, Testing unconstrained optimization software, ACM Trans. Math. Software 7 (1981), no. 1, 17–41.

[25] G. W. Stewart and J. G. Sun, Matrix Perturbation Theory, Computer Science and Scientific Computing, Academic Press, Boston, 1990.

[26] L. Zheng, L. Chen, and Y. H. Ma, A variant of the Levenberg-Marquardt method with adaptive parameters for systems of nonlinear equations, AIMS Math. 7 (2021), no. 1, 1241–1256, https://doi.org/10.3934/math.2022073.

Received: 2021-11-21
Revised: 2022-06-03
Accepted: 2022-07-21
Published Online: 2022-09-12

© 2022 Lin Zheng et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
