
A self-adaptive inertial extragradient method for a class of split pseudomonotone variational inequality problems

  • Abd-Semii Oluwatosin-Enitan Owolabi, Timilehin Opeyemi Alakoya and Oluwatosin Temitope Mewomo
Published/Copyright: April 27, 2023

Abstract

In this article, we study a class of pseudomonotone split variational inequality problems (VIPs) with non-Lipschitz operator. We propose a new inertial extragradient method with self-adaptive step sizes for finding the solution to the aforementioned problem in the framework of Hilbert spaces. Moreover, we prove a strong convergence result for the proposed algorithm without prior knowledge of the operator norm and under mild conditions on the control parameters. The main advantages of our algorithm are: the strong convergence result obtained without prior knowledge of the operator norm and without the Lipschitz continuity condition often assumed by authors; the minimized number of projections per iteration compared to related results in the literature; the inertial technique employed, which speeds up the rate of convergence; and unlike several of the existing results in the literature on VIPs with non-Lipschitz operators, our method does not require any linesearch technique for its implementation. Finally, we present several numerical examples to illustrate the usefulness and applicability of our algorithm.

MSC 2010: 65K15; 47J25; 65J15; 90C33

1 Introduction

Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$ with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$, and let $f : H \to H$ be an operator. The variational inequality problem (VIP) for $f$ on $C$ is defined as follows:

(1) find $\hat{x} \in C$ such that $\langle f\hat{x}, x - \hat{x} \rangle \ge 0, \quad \forall x \in C.$

If $f$ is monotone, problem (1) is known as a monotone VIP, while it is known as a pseudomonotone VIP if $f$ is pseudomonotone. We denote the solution set of VIP (1) by $VI(C, f)$. In the early 1960s, Stampacchia [1] and Fichera [2] independently introduced the theory of VIPs. The VIP is a fundamental problem with a wide range of applications in applied mathematics, such as network equilibrium problems, complementarity problems, optimization theory, and systems of nonlinear equations (see [3,4]). As a result of these wide applications, several authors have proposed iterative algorithms for approximating the solution of the VIP and related optimization problems (see [5–11] and the references therein). The VIP is widely known to be equivalent to the following fixed point equation:

(2) $x = P_C(I - \lambda f)x,$

for $\lambda > 0$, where $P_C$ is the metric projection from $H$ onto $C$.

A simple iterative formula that is an extension of (2) is the projection gradient method presented as follows:

(3) $x_{n+1} = P_C(I - \lambda f)x_n,$

where $\lambda \in \left(0, \frac{2\alpha}{L^2}\right)$ and $f : H \to H$ is $\alpha$-strongly monotone and $L$-Lipschitz continuous. It is known that the scheme (3) converges, and only weakly, under the rather strict condition that the operator $f$ is strongly monotone or inverse strongly monotone, and it may fail to converge when $f$ is merely monotone.
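For intuition, the projection gradient iteration (3) can be sketched in a few lines of NumPy. The feasible set, the operator, and the data below are hypothetical choices, not taken from the paper; $f(x) = x - b$ is $1$-strongly monotone and $1$-Lipschitz, so any $\lambda \in (0, 2)$ is admissible.

```python
import numpy as np

# Illustrative sketch (hypothetical data): the projection gradient method
# x_{n+1} = P_C(x_n - lam * f(x_n)) on the box C = [0, 1]^2 with the
# strongly monotone operator f(x) = x - b (alpha = L = 1, so lam in (0, 2)).
b = np.array([2.0, -1.0])
f = lambda x: x - b                      # 1-strongly monotone, 1-Lipschitz
proj_C = lambda x: np.clip(x, 0.0, 1.0)  # metric projection onto the box

lam = 0.5
x = np.zeros(2)
for _ in range(100):
    x = proj_C(x - lam * f(x))

# For this data the VIP solution is P_C(b) = (1, 0).
```

For this strongly monotone choice of $f$ the iterates converge; the point of the methods discussed next is that this simple scheme breaks down for merely monotone operators.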

To overcome this barrier, Korpelevich [12] introduced a famous method, called the extragradient method (EgM), for solving the VIP in finite-dimensional Euclidean spaces. It is defined as follows:

(4) $y_n = P_C(x_n - \lambda f x_n), \quad x_{n+1} = P_C(x_n - \lambda f y_n), \quad n \ge 1,$

where $f$ is monotone and $L$-Lipschitz continuous, $\lambda \in \left(0, \frac{1}{L}\right)$, and $C \subset \mathbb{R}^n$ is a closed convex set. If the solution set $VI(C, f)$ is nonempty, then the sequence $\{x_n\}$ generated by the EgM converges to an element of $VI(C, f)$.
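The two-projection structure of the EgM can be sketched as follows. The rotation operator below is a standard hypothetical example (not from the paper) of a monotone operator that is not strongly monotone, for which the EgM converges while the plain projection gradient iteration does not.

```python
import numpy as np

# Illustrative sketch (hypothetical data): Korpelevich's extragradient
# method (4) for f(x) = M x, a monotone, 1-Lipschitz rotation operator
# on C = [-2, 2]^2 whose unique solution is x* = 0. Here lam in (0, 1/L).
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = lambda x: M @ x
proj_C = lambda x: np.clip(x, -2.0, 2.0)

lam = 0.5
x = np.array([1.0, 1.0])
for _ in range(200):
    y = proj_C(x - lam * f(x))   # extrapolation step (first projection)
    x = proj_C(x - lam * f(y))   # correction step (second projection)

# x approaches the solution x* = 0.
```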

In recent years, the EgM has received great attention from numerous authors, who have improved it in various ways (see, for instance, [13–15]). Observe that the EgM requires the computation of two projections onto the closed convex set $C$ per iteration. However, the projection onto an arbitrary closed convex set is often very difficult to compute. To overcome this drawback, authors have developed more efficient iterative algorithms; some of these algorithms are discussed below.

In 2000, Tseng [16] proposed the following iterative scheme known as the Tseng’s extragradient method (TEgM):

Algorithm 1.1

(5) $x_0 \in H, \quad y_n = P_C(x_n - \lambda f x_n), \quad x_{n+1} = y_n - \lambda(f y_n - f x_n),$

where $f$ is a monotone and $L$-Lipschitz continuous operator and $\lambda \in \left(0, \frac{1}{L}\right)$. Clearly, the TEgM requires only one projection per iteration and hence has a computational advantage over the EgM.
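The single-projection structure of Tseng's method can be sketched on the same kind of hypothetical monotone rotation example as above (again, the data are illustrative, not from the paper):

```python
import numpy as np

# Illustrative sketch (hypothetical data): Tseng's method (5). Only one
# projection onto C is computed per iteration; the second update is an
# explicit, projection-free correction step.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = lambda x: M @ x                        # monotone, 1-Lipschitz
proj_C = lambda x: np.clip(x, -2.0, 2.0)

lam = 0.5
x = np.array([1.0, 1.0])
for _ in range(200):
    y = proj_C(x - lam * f(x))     # the single projection per iteration
    x = y - lam * (f(y) - f(x))    # projection-free correction

# x approaches the solution x* = 0.
```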

Furthermore, Censor et al. [8] introduced a new method obtained by modifying one of the projections in the EgM, replacing it with a projection onto a half-space. This method is called the subgradient extragradient method (SEgM) and is defined as follows:

Algorithm 1.2

(SEgM)

(6) $x_0 \in H, \quad y_n = P_C(x_n - \lambda f x_n), \quad T_n = \{z \in H : \langle x_n - \lambda f x_n - y_n, z - y_n \rangle \le 0\}, \quad x_{n+1} = P_{T_n}(x_n - \lambda f y_n).$

Censor et al. [8,9] proved that, provided the solution set $VI(C, f)$ is nonempty, the sequence $\{x_n\}$ generated by the SEgM converges weakly to an element $p \in VI(C, f)$, where $p = \lim_{n \to \infty} P_{VI(C, f)} x_n$.
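The computational point of the SEgM is that the second projection is onto a half-space, which has a closed form. A sketch with hypothetical problem data (not from the paper) follows; $f(x) = x - b$ is monotone and $1$-Lipschitz on the box $C = [0, 2]^2$.

```python
import numpy as np

# Illustrative sketch (hypothetical data): the SEgM (6). The second
# projection is onto the half-space T_n rather than onto C itself.
# For this data the VIP solution is P_C(b) = (2, 0).
b = np.array([3.0, -1.0])
f = lambda x: x - b
proj_C = lambda x: np.clip(x, 0.0, 2.0)

def proj_halfspace(u, a, y):
    """Project u onto { z : <a, z - y> <= 0 } (identity if u already lies in it)."""
    s = a @ (u - y)
    return u if s <= 0 else u - (s / (a @ a)) * a

lam = 0.5
x = np.zeros(2)
for _ in range(300):
    y = proj_C(x - lam * f(x))
    a = x - lam * f(x) - y                   # normal vector of T_n
    x = proj_halfspace(x - lam * f(y), a, y)
```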

Also, Maingé and Gobinddass [17] obtained a result that relates to a weak convergence algorithm by using only a single projection by means of a projected reflected gradient-type method [14] and an inertial term for finding the solution of VIP in real Hilbert spaces.

Another related problem is the fixed point problem (FPP). Let $S : C \to C$ be a nonlinear mapping. A point $p \in C$ is called a fixed point of $S$ if $Sp = p$. We denote by $F(S)$ the set of fixed points of $S$, that is,

(7) $F(S) = \{p \in C : Sp = p\}.$

Many problems in science and engineering can be formulated as finding the solution of an FPP for a nonlinear operator.

Recently, Thong and Hieu [18] introduced the following viscosity-type subgradient extragradient algorithm for approximating a common solution of VIP and FPP in Hilbert spaces:

Algorithm 1.3

Let $x_1 \in H$, $\lambda_1 > 0$, and $\mu \in (0, 1)$. Compute $x_{n+1}$ as follows:

Step 1. Calculate

$y_n = P_C(x_n - \lambda_n A x_n).$

Step 2. Compute

$z_n = P_{T_n}(x_n - \lambda_n A y_n),$

where $T_n = \{x \in H : \langle x_n - \lambda_n A x_n - y_n, x - y_n \rangle \le 0\}$.

Step 3. Compute

$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)[(1 - \beta_n) z_n + \beta_n S z_n]$

and

$\lambda_{n+1} = \begin{cases} \min\left\{\dfrac{\mu \|x_n - y_n\|}{\|A x_n - A y_n\|}, \lambda_n\right\}, & \text{if } A x_n \ne A y_n, \\ \lambda_n, & \text{otherwise.} \end{cases}$

Set $n \coloneqq n + 1$ and go to Step 1.

Here, $S : H \to H$ is a demicontractive mapping such that $I - S$ is demiclosed at zero, $A : H \to H$ is monotone and Lipschitz continuous, and $f : H \to H$ is a contraction.

Censor et al. [7] introduced another problem, called the split variational inequality problem (SVIP). The SVIP, which is more general than the VIP, is formulated as follows: find $x^* \in C$ such that

(8) $\langle f x^*, y - x^* \rangle \ge 0, \quad \forall y \in C$

and

(9) $\langle g(A x^*), z - A x^* \rangle \ge 0, \quad \forall z \in Q,$

where $C$ and $Q$ are nonempty, closed, and convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively; $f$ and $g$ are nonlinear mappings on $C$ and $Q$, respectively; and $A : H_1 \to H_2$ is a bounded linear operator. Observe that the SVIP can be viewed as a pair of VIPs in which the image, under the given bounded linear operator $A$, of the solution of one VIP in the space $H_1$ is a solution of another VIP in the space $H_2$.

The following algorithm was introduced by Censor et al. [7] for solving SVIP (equations (8) and (9)):

(10) $x_{n+1} = P_C(I - \lambda f)(x_n + \tau A^*(P_Q(I - \lambda g) - I)A x_n), \quad n \in \mathbb{N},$

and the authors proved the following convergence theorem:

Theorem 1.4

Let $A : H_1 \to H_2$ be a bounded linear operator, and let $f : H_1 \to H_1$ and $g : H_2 \to H_2$ be, respectively, $\alpha_1$- and $\alpha_2$-inverse strongly monotone operators with $\alpha \coloneqq \min\{\alpha_1, \alpha_2\}$. Assume that SVIP (equations (8) and (9)) is consistent, $\tau \in \left(0, \frac{1}{L}\right)$ with $L$ being the spectral radius of the operator $A^*A$, $\lambda \in (0, 2\alpha)$, and suppose that for all $x^*$ solving SVIP (equations (8) and (9)),

(11) $\langle f(x), P_C(I - \lambda f)(x) - x^* \rangle \ge 0, \quad \forall x \in H_1.$

Then the sequence { x n } generated by (10) converges weakly to a solution of SVIP (equations (8) and (9)).

It is clear that the scheme (10) fully exploits the splitting structure of the SVIP (equations (8) and (9)). However, the weak convergence of this method was proved under some strong assumptions, such as assumption (11) and the requirement that both mappings be co-coercive (inverse strongly monotone). It is worth mentioning that assumption (11), which depends on the averaged operator technique, has been dispensed with by other authors for solving the SVIP and related problems (see, e.g., [19–22]), but their methods also rely on the co-coercivity of the cost operators.

In order to overcome some of these weaknesses, He et al. [23] proposed an easily implementable relaxed projection method, which fully exploits the splitting structure of the SVIP, for solving the SVIP (equations (8) and (9)) when the underlying operators are monotone and Lipschitz continuous in finite dimensional spaces. However, this method still requires the reformulation of the original problem into a VIP in a product space (for more details, see [23]).

Tian and Jiang [24] studied a more general class of SVIPs. Precisely, they investigated the following problem: find $x^* \in C$ such that

(12) $\langle f x^*, y - x^* \rangle \ge 0 \quad \forall y \in C \quad \text{and} \quad A x^* \in F(S),$

where f : C H 1 is monotone and Lipschitz continuous, S : H 2 H 2 is a nonexpansive mapping, and A : H 1 H 2 is a bounded linear operator.

Moreover, the authors proposed the following algorithm for approximating the solution of problem (12):

(13) $y_n = P_C(x_n - \tau_n A^*(I - S)A x_n), \quad t_n = P_C(y_n - \lambda_n f(y_n)), \quad x_{n+1} = P_C(y_n - \lambda_n f(t_n)), \quad n \ge 1.$

In addition, they proved the following convergence theorem:

Theorem 1.5

Let $H_1$ and $H_2$ be real Hilbert spaces, and let $C$ be a nonempty closed convex subset of $H_1$. Let $A : H_1 \to H_2$ be a bounded linear operator such that $A \ne 0$, and let $S : H_2 \to H_2$ be a nonexpansive mapping. Let $f : C \to H_1$ be a monotone and $L$-Lipschitz continuous mapping. Suppose that $\Gamma = \{z \in VI(C, f) : A z \in F(S)\} \ne \emptyset$ and that the sequence $\{x_n\}$ is defined for arbitrary $x_1 \in C$ by (13), where $\{\tau_n\} \subset [a, b]$ for some $a, b \in \left(0, \frac{1}{\|A\|^2}\right)$ and $\{\lambda_n\} \subset [c, d]$ for some $c, d \in \left(0, \frac{1}{L}\right)$. Then $\{x_n\}$ converges weakly to an element of $\Gamma$.

It is clear that the class of SVIPs (12) considered by Tian and Jiang [24] generalizes the class (equations (8) and (9)) considered by Censor et al. [7]. However, we observe that the result of Tian and Jiang [24] is only applicable when the associated cost operator $f$ is monotone and Lipschitz continuous and $S$ is nonexpansive. Moreover, the implementation of their algorithm (13) requires knowledge of the Lipschitz constant of the cost operator $f$ and prior knowledge of the operator norm $\|A\|$. In several instances, these parameters are unknown or difficult to estimate, which can hinder the implementation of the algorithm. In spite of all these stringent conditions, the authors were only able to obtain a weak convergence result. It is known that in solving optimization problems, strong convergence results are more applicable and therefore more desirable than weak convergence results.

To remedy some of the above limitations, Ogwo et al. [25] proposed and analyzed the convergence of the following algorithm for solving SVIP (12) when the underlying operator $f$ is pseudomonotone (a weaker assumption than monotonicity) and Lipschitz continuous, and $S$ is a strictly pseudocontractive mapping:

Algorithm 1.6

Initialization: Let $\gamma > 0$, $l, \mu \in (0, 1)$, and let $x_1 \in H_1$ be given arbitrarily.

Iterative Steps: Calculate x n + 1 as follows:

Step 1. Compute

$w_n = P_C(x_n - \tau_n A^*(I - T_\beta)A x_n) \quad \text{and} \quad y_n = P_C(w_n - \lambda_n f w_n),$

where $0 < a \le \tau_n \le b < \frac{1}{\|A\|^2}$, $T_\beta = \beta I + (1 - \beta)S$, and $\lambda_n$ is chosen to be the largest $\lambda \in \{\gamma, \gamma l, \gamma l^2, \ldots\}$ satisfying

$\lambda \|f w_n - f y_n\| \le \mu \|w_n - y_n\|.$

Step 2. Compute

$x_{n+1} = \alpha_n g(x_n) + (1 - \alpha_n) z_n,$

where $z_n = y_n - \lambda_n (f y_n - f w_n).$

Set $n \coloneqq n + 1$ and go back to Step 1.

Here, $f : H_1 \to H_1$ is a pseudomonotone, Lipschitz continuous operator that is sequentially weakly continuous on bounded subsets of $H_1$, $g : H_1 \to H_1$ is a contraction mapping with constant $\rho \in (0, 1)$, $A : H_1 \to H_2$ is a bounded linear operator, and $S : H_2 \to H_2$ is a $\kappa$-strictly pseudocontractive mapping with $\kappa \in [0, 1)$. Moreover, the authors proved the following strong convergence theorem:

Theorem 1.7

Let $\{x_n\}$ be a sequence generated by Algorithm 1.6. Assume that $\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = +\infty$. Then $\{x_n\}$ converges strongly to $z = P_\Gamma g(z)$, where $\Gamma = \{z \in VI(C, f) : A z \in F(S)\}$.

We observe that while the result of Ogwo et al. [25] improves on that of Tian and Jiang [24], it has the following drawbacks: (1) the result is not applicable when the cost operator $f$ is non-Lipschitz and/or not sequentially weakly continuous, or when the mapping $S$ belongs to a class more general than the strict pseudocontractions; (2) the proposed algorithm involves a linesearch, which could be computationally expensive to implement due to its inner loop; (3) implementation of the algorithm requires knowledge of the operator norm.

One of our goals in this article is to remedy the above drawbacks. More precisely, we propose a new iterative method for approximating the solution of SVIP (12) with the following features:

  1. Unlike the results of Ogwo et al. [25] and Tian and Jiang [24], our proposed algorithm is applicable when the cost operator $f$ is a non-Lipschitz pseudomonotone operator and $S$ is a quasi-pseudocontraction. Moreover, our method does not require the sequential weak continuity condition assumed in [25] and in several other existing results in the literature on VIPs with pseudomonotone operators.

  2. Our proposed algorithm does not involve any linesearch technique. It uses a simple but efficient self-adaptive step size technique that generates a nonmonotonic sequence of step sizes.

  3. The implementation of our proposed algorithm does not require knowledge of the operator norm.

  4. Unlike the result of Tian and Jiang [24] and several other results in the literature on SVIP, our method requires evaluating a minimal number of projections per iteration.

  5. Our method employs the inertial technique to accelerate the rate of convergence.

  6. In addition, the sequence generated by our proposed algorithm converges strongly to the solution of the SVIP (12).

Finally, we present some applications and numerical examples to illustrate the usefulness and efficiency of our proposed method in comparison with some related methods in the literature.

The remaining sections of this article are organized as follows. In Section 2, we recall some basic definitions and lemmas that are relevant for establishing our main result. In Section 3, we present our proposed method, while in Section 4, we first establish some lemmas that are useful for the convergence analysis and then prove the strong convergence theorem for the algorithm. In Section 5, we present some numerical examples to illustrate the performance of our method and compare it with some related methods in the literature. Finally, in Section 6, we give concluding remarks.

2 Preliminaries

In this section, we recall some basic lemmas and definitions required to establish our results. We denote by $x_n \rightharpoonup x$ and $x_n \to x$ the weak and strong convergence, respectively, of a sequence $\{x_n\}$ in $H$ to a point $x \in H$. Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. The metric projection [26] $P_C : H \to C$ is defined, for each $x \in H$, as the unique element $P_C x \in C$ such that

$\|x - P_C x\| = \inf\{\|x - z\| : z \in C\}.$

The operator P C is nonexpansive and has the following properties [27,28]:

  1. for all $x, y \in H$, we have

    $\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y \rangle;$

  2. for any $x \in H$, $z = P_C x$ if and only if

    $\langle x - z, z - y \rangle \ge 0, \quad \forall y \in C;$

  3. for any $x \in H$ and $y \in C$, we have

    $\|P_C x - y\|^2 + \|x - P_C x\|^2 \le \|x - y\|^2.$
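These three properties can be checked numerically. The sketch below is illustrative only: it takes the hypothetical box $C = [0,1]^3$, for which $P_C$ is a coordinatewise clip, and verifies each inequality on random points.

```python
import numpy as np

# Illustrative sketch: numerically checking properties (1)-(3) of the
# metric projection for the hypothetical box C = [0, 1]^3.
proj_C = lambda v: np.clip(v, 0.0, 1.0)
rng = np.random.default_rng(0)

for _ in range(100):
    x, w = rng.normal(size=3), rng.normal(size=3)
    z, y = proj_C(x), proj_C(w)   # z = P_C(x); y is an arbitrary point of C
    # (1): ||P_C x - P_C w||^2 <= <P_C x - P_C w, x - w>
    assert np.dot(z - y, x - w) >= np.dot(z - y, z - y) - 1e-12
    # (2): <x - z, z - y> >= 0 for every y in C characterizes z = P_C(x)
    assert np.dot(x - z, z - y) >= -1e-12
    # (3): ||P_C x - y||^2 + ||x - P_C x||^2 <= ||x - y||^2 for y in C
    assert np.dot(z - y, z - y) + np.dot(x - z, x - z) <= np.dot(x - y, x - y) + 1e-12

checked = True
```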

Definition 2.1

Let T : C C be a mapping. Then, T is called

  1. $L$-Lipschitz continuous, if there exists a constant $L > 0$ such that

    $\|T x - T y\| \le L \|x - y\|, \quad \forall x, y \in C;$

    if $L \in [0, 1)$, then $T$ is a contraction mapping, and $T$ is nonexpansive if $L = 1$;

  2. quasi-nonexpansive, if $F(T) \ne \emptyset$ and

    $\|T x - p\| \le \|x - p\|, \quad \forall x \in C \text{ and } p \in F(T);$

  3. $\kappa$-strictly pseudocontractive, if there exists $\kappa \in [0, 1)$ such that

    $\|T x - T y\|^2 \le \|x - y\|^2 + \kappa \|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C;$

    if $\kappa = 1$, then $T$ is pseudocontractive;

  4. monotone, if

    $\langle T x - T y, x - y \rangle \ge 0, \quad \forall x, y \in C;$

  5. $\alpha$-inverse strongly monotone ($\alpha$-ism) (or $\alpha$-co-coercive), if there exists $\alpha > 0$ such that

    $\langle T x - T y, x - y \rangle \ge \alpha \|T x - T y\|^2, \quad \forall x, y \in C;$

    if $\alpha = 1$, then $T$ is firmly nonexpansive;

  6. $\beta$-strongly monotone, if there exists $\beta > 0$ such that

    $\langle T x - T y, x - y \rangle \ge \beta \|x - y\|^2, \quad \forall x, y \in C;$

  7. pseudomonotone, if

    $\langle T x, y - x \rangle \ge 0 \implies \langle T y, y - x \rangle \ge 0, \quad \forall x, y \in C;$

  8. $\alpha$-averaged, if $T = (1 - \alpha) I + \alpha S$, where $\alpha \in (0, 1)$ and $S : C \to C$ is nonexpansive; see [29];

  9. uniformly continuous, if for every $\varepsilon > 0$, there exists $\delta = \delta(\varepsilon) > 0$ such that

    $\|T x - T y\| < \varepsilon$ whenever $\|x - y\| < \delta$, $\forall x, y \in C.$

In this connection, see Proposition 11.2 on page 42 of [30] and the early article by Bruck and Reich [31]. It is known that firmly nonexpansive mappings are $\frac{1}{2}$-averaged, while averaged mappings are nonexpansive. Also, every $\alpha$-inverse strongly monotone mapping is $\frac{1}{\alpha}$-Lipschitz continuous. Moreover, if $f$ is $\alpha$-strongly monotone and $L$-Lipschitz continuous, then $f$ is $\frac{\alpha}{L^2}$-ism. Furthermore, both $\alpha$-strongly monotone and $\alpha$-inverse strongly monotone mappings are monotone, while monotone mappings are pseudomonotone. However, the converses are not true. For instance, the mapping $f : (0, \infty) \to (0, \infty)$ defined by $f x = \frac{1}{1 + x}$ is pseudomonotone but not monotone. In addition, we note that uniform continuity is a weaker notion than Lipschitz continuity. For more examples of pseudomonotone operators that are not monotone, see [32,33].

Definition 2.2

An operator T : C C is said to be quasi-pseudocontractive, if F ( T ) and

$\|T x - p\|^2 \le \|x - p\|^2 + \|T x - x\|^2, \quad \forall x \in C, \; p \in F(T).$

Clearly, the class of quasi-pseudocontractive mappings includes the class of pseudocontractive mappings with nonempty fixed point sets, and it contains several other classes of mappings.

It is well known that if D is a convex subset of H , then T : D H is uniformly continuous if and only if, for every ε > 0 , there exists a constant K < + such that

(14) $\|T x - T y\| \le K \|x - y\| + \varepsilon, \quad \forall x, y \in D.$

Lemma 2.3

[30] Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Given $x \in H$ and $z \in C$, then $z = P_C x \iff \langle x - z, z - y \rangle \ge 0 \; \forall y \in C$.

Lemma 2.4

[25] Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying

$a_{n+1} \le (1 - \alpha_n) a_n + \alpha_n \lambda_n + \delta_n, \quad n \ge 0,$

where { α n } , { λ n } , and { δ n } satisfy the following conditions:

  1. $\{\alpha_n\} \subset [0, 1]$, $\sum_{n=0}^{\infty} \alpha_n = \infty$;

  2. $\limsup_{n \to \infty} \lambda_n \le 0$;

  3. $\delta_n \ge 0$ for all $n \ge 0$, $\sum_{n=0}^{\infty} \delta_n < \infty$.

Then, $\lim_{n \to \infty} a_n = 0$.

Lemma 2.5

[34] Let $H$ be a real Hilbert space and let $S : H \to H$ be an $L$-Lipschitz mapping with $L \ge 1$. Denote

$T \coloneqq (1 - \Phi) I + \Phi S((1 - \eta) I + \eta S).$

If $0 < \Phi < \eta < \frac{1}{1 + \sqrt{1 + L^2}}$, then the following hold:

  1. $F(S) = F(S((1 - \eta) I + \eta S)) = F(T)$;

  2. if $I - S$ is demiclosed at zero, then $I - T$ is also demiclosed at zero;

  3. in addition, if $S : H \to H$ is a quasi-pseudocontractive mapping, then the mapping $T$ is quasi-nonexpansive.

Lemma 2.6

[25,35] Let $H$ be a real Hilbert space. Then, for all $x, y \in H$ and $\beta \in \mathbb{R}$, the following hold:

  1. $\|\beta x + (1 - \beta) y\|^2 = \beta \|x\|^2 + (1 - \beta) \|y\|^2 - \beta (1 - \beta) \|x - y\|^2$;

  2. $\|x + y\|^2 \le \|x\|^2 + 2 \langle y, x + y \rangle$;

  3. $\|x + y\|^2 = \|x\|^2 + 2 \langle x, y \rangle + \|y\|^2$.
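As a quick sanity check, the three relations of Lemma 2.6 can be verified numerically; the sketch below is illustrative, using random vectors in $\mathbb{R}^4$ and a sample value of $\beta$.

```python
import numpy as np

# Illustrative sketch: checking the relations of Lemma 2.6 numerically.
rng = np.random.default_rng(1)
x, y = rng.normal(size=4), rng.normal(size=4)
beta = 0.37                           # (i) holds for every real beta
nsq = lambda v: float(np.dot(v, v))   # squared norm

# (i) convex-combination identity
lhs1 = nsq(beta * x + (1 - beta) * y)
rhs1 = beta * nsq(x) + (1 - beta) * nsq(y) - beta * (1 - beta) * nsq(x - y)

# (ii) inequality: ||x + y||^2 <= ||x||^2 + 2<y, x + y>
lhs2 = nsq(x + y)
rhs2 = nsq(x) + 2 * float(np.dot(y, x + y))

# (iii) expansion of the squared norm
lhs3 = nsq(x + y)
rhs3 = nsq(x) + 2 * float(np.dot(x, y)) + nsq(y)
```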

Lemma 2.7

[36] Let $\{\Gamma_n\}$ be a sequence of real numbers that does not decrease at infinity, in the sense that there exists a subsequence $\{\Gamma_{n_j}\}_{j \ge 0}$ of $\{\Gamma_n\}$ such that

(15) $\Gamma_{n_j} < \Gamma_{n_j + 1}, \quad \forall j \ge 0.$

Furthermore, consider the sequence of integers $\{\tau(n)\}_{n \ge n_0}$ defined by

(16) $\tau(n) = \max\{k \le n \mid \Gamma_k < \Gamma_{k+1}\}.$

Then, $\{\tau(n)\}_{n \ge n_0}$ is a nondecreasing sequence such that $\tau(n) \to \infty$ as $n \to \infty$, and for all $n \ge n_0$, the following hold:

(17) $\Gamma_{\tau(n)} \le \Gamma_{\tau(n)+1} \quad \text{and} \quad \Gamma_n \le \Gamma_{\tau(n)+1}.$

Lemma 2.8

[37] Suppose $\{\lambda_n\}$ and $\{\phi_n\}$ are two nonnegative real sequences such that

$\lambda_{n+1} \le \lambda_n + \phi_n, \quad \forall n \ge 1.$

If $\sum_{n=1}^{\infty} \phi_n < \infty$, then $\lim_{n \to \infty} \lambda_n$ exists.

Lemma 2.9

[38] Let $H$ be a real Hilbert space and $T : H \to H$ be a nonexpansive mapping with $F(T) \ne \emptyset$. If $\{x_n\}$ is a sequence in $H$ converging weakly to $x$ and $\{(I - T) x_n\}$ converges strongly to $y$, then $(I - T) x = y$.

Lemma 2.10

[39] Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$, and let the operator $f : C \to H$ be continuous and pseudomonotone. Then, $x^*$ is a solution of $VI(C, f)$ if and only if $\langle f x, x - x^* \rangle \ge 0$ for all $x \in C$.

3 Proposed algorithm

In this section, we present our proposed algorithm. Let $H_1$ and $H_2$ be real Hilbert spaces, and let $C$ be a nonempty, closed, and convex subset of $H_1$. Suppose $g : H_1 \to H_1$ is a contraction mapping with constant $\rho \in (0, 1)$, $A : H_1 \to H_2$ is a bounded linear operator, and $S : H_2 \to H_2$ is a quasi-pseudocontractive mapping such that $I - S$ is demiclosed at zero. We assume that the solution set $\Gamma = \{z \in VI(C, f) : A z \in F(S)\}$ is nonempty.

We establish the strong convergence of our proposed algorithm under the following conditions:

  (C1) $\{\alpha_n\} \subset (0, 1)$, $\lim_{n \to \infty} \alpha_n = 0$, $\sum_{n=0}^{\infty} \alpha_n = +\infty$;

  (C2) the operator $f : H_1 \to H_1$ is pseudomonotone and uniformly continuous on $H_1$ and satisfies the following condition:

    (a) whenever $\{x_n\} \subset C$ and $x_n \rightharpoonup z$, one has $\|f z\| \le \liminf_{n \to \infty} \|f x_n\|$;

  (C3) $\{\varepsilon_n\}$ is a positive sequence such that $\lim_{n \to \infty} \frac{\varepsilon_n}{\alpha_n} = 0$;

  (C4) $\{\psi_n\}$ is a nonnegative sequence such that $\sum_{n=1}^{\infty} \psi_n < +\infty$.

Now we present our proposed algorithm as follows:

Algorithm 3.1

Step 0: Select $x_0, x_1 \in H_1$, $\lambda_1 > 0$, $\theta > 0$, $\mu \in (0, 1)$, and $0 < \phi_1 \le \phi_n \le \phi_2 < 1$, and set $n = 1$.

Step 1: Given the ( n 1 )th and nth iterates, choose θ n such that 0 θ n θ ˜ n with θ ˜ n defined as follows:

(18) $\tilde{\theta}_n = \begin{cases} \min\left\{\theta, \dfrac{\varepsilon_n}{\|x_n - x_{n-1}\|}\right\}, & \text{if } x_n \ne x_{n-1}, \\ \theta, & \text{otherwise.} \end{cases}$

Step 2: Compute

$w_n = x_n + \theta_n (x_n - x_{n-1}).$

Step 3: Compute

(19) $z_n = P_C(w_n - \tau_n A^*(I - T) A w_n),$

where

$T = (1 - \eta) I + \eta S((1 - \xi) I + \xi S).$

Step 4: Compute

(20) $y_n = P_C(z_n - \lambda_n f z_n).$

Step 5: Compute

$u_n = y_n - \lambda_n (f y_n - f z_n),$

(21) $x_{n+1} = \alpha_n g(w_n) + (1 - \alpha_n) u_n,$

where

(22) $\lambda_{n+1} = \begin{cases} \min\left\{\dfrac{\mu \|z_n - y_n\|}{\|f z_n - f y_n\|}, \lambda_n + \psi_n\right\}, & \text{if } f z_n \ne f y_n, \\ \lambda_n + \psi_n, & \text{otherwise,} \end{cases}$

and

$\tau_n = \begin{cases} \dfrac{\phi_n \|(I - T) A w_n\|^2}{\|A^*(I - T) A w_n\|^2}, & \text{if } A w_n \ne T A w_n, \\ \tau, & \text{otherwise} \; (\tau \text{ being any nonnegative real number}). \end{cases}$

Set $n \coloneqq n + 1$ and go to Step 1.
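The two self-adaptive rules of Algorithm 3.1 can be sketched as small update functions. The sketch below is illustrative, with hypothetical iterates: rule (18) caps the inertial parameter so that $\theta_n \|x_n - x_{n-1}\| \le \varepsilon_n$, while rule (22) updates $\lambda_n$ without any linesearch; the summable slack $\psi_n$ lets the step size occasionally increase, so $\{\lambda_n\}$ need not be monotone.

```python
import numpy as np

def inertial_theta(theta, eps_n, x_n, x_prev):
    # rule (18): upper bound theta~_n for the inertial parameter
    gap = np.linalg.norm(x_n - x_prev)
    return min(theta, eps_n / gap) if gap > 0 else theta

def next_lambda(lam, psi_n, mu, z_n, y_n, fz_n, fy_n):
    # rule (22): self-adaptive, linesearch-free step size update
    denom = np.linalg.norm(fz_n - fy_n)
    cap = lam + psi_n
    return min(mu * np.linalg.norm(z_n - y_n) / denom, cap) if denom > 0 else cap

# Example with hypothetical data:
th = inertial_theta(theta=0.9, eps_n=1e-3,
                    x_n=np.array([1.0, 0.0]), x_prev=np.array([0.0, 0.0]))
lam = next_lambda(lam=1.0, psi_n=0.01, mu=0.5,
                  z_n=np.array([1.0, 0.0]), y_n=np.array([0.0, 0.0]),
                  fz_n=np.array([2.0, 0.0]), fy_n=np.array([0.0, 0.0]))
```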

Remark 3.2

  1. By conditions (C1) and (C3), it follows from (18) that

    (23) $\lim_{n \to \infty} \theta_n \|x_n - x_{n-1}\| = 0 \quad \text{and} \quad \lim_{n \to \infty} \frac{\theta_n}{\alpha_n} \|x_n - x_{n-1}\| = 0.$

  2. Observe that by Lemma 2.5, the mapping T is quasi-nonexpansive and I T is demiclosed at zero.

Remark 3.3

  1. We point out that condition (C2)(a) is a much weaker assumption than the sequential weak continuity assumption used in several of the existing results in the literature.

  2. Observe that although the pseudomonotone operator $f$ is not necessarily Lipschitz, our proposed method does not require any linesearch; instead, it uses the simple step size rule (22), which generates a nonmonotonic sequence of step sizes. The step size is constructed so as to reduce the dependence of the algorithm on the initial step size $\lambda_1$.

4 Convergence analysis

In this section, we analyze the convergence of our proposed algorithm. First, we establish some lemmas required to prove our strong convergence theorem.

Lemma 4.1

Let $\{\lambda_n\}$ be the sequence of step sizes generated by Algorithm 3.1. Then, $\{\lambda_n\}$ is well defined and $\lim_{n \to \infty} \lambda_n = \lambda \in \left[\min\left\{\frac{\mu}{N}, \lambda_1\right\}, \lambda_1 + \Psi\right]$, where $\Psi = \sum_{n=1}^{\infty} \psi_n$ and $N > 0$ is a constant.

Proof

Since $f$ is uniformly continuous, it follows from (14) that for any given $\varepsilon > 0$, there exists $K < +\infty$ such that $\|f z_n - f y_n\| \le K \|z_n - y_n\| + \varepsilon$. Thus, in the case $f z_n \ne f y_n$ for all $n \ge 1$, we have

$\frac{\mu \|z_n - y_n\|}{\|f z_n - f y_n\|} \ge \frac{\mu \|z_n - y_n\|}{K \|z_n - y_n\| + \varepsilon} = \frac{\mu \|z_n - y_n\|}{(K + \varepsilon_1) \|z_n - y_n\|} = \frac{\mu}{N},$

where $\varepsilon = \varepsilon_1 \|z_n - y_n\|$ for some $\varepsilon_1 \in (0, 1)$ and $N = K + \varepsilon_1$. Therefore, by the definition of $\lambda_{n+1}$, the sequence $\{\lambda_n\}$ is bounded below by $\min\left\{\frac{\mu}{N}, \lambda_1\right\}$ and above by $\lambda_1 + \Psi$. By Lemma 2.8, $\lim_{n \to \infty} \lambda_n$ exists; denote it by $\lambda$. Clearly, $\lambda \in \left[\min\left\{\frac{\mu}{N}, \lambda_1\right\}, \lambda_1 + \Psi\right]$.□

Lemma 4.2

Let $\{x_n\}$ be a sequence generated by Algorithm 3.1. Then, $\{x_n\}$ is bounded. Furthermore, if $\lim_{n \to \infty} \alpha_n = 0$, then $\lim_{n \to \infty} \|x_{n+1} - u_n\| = 0$.

Proof

By (22), we have

$\lambda_{n+1} = \min\left\{\frac{\mu \|z_n - y_n\|}{\|f z_n - f y_n\|}, \lambda_n + \psi_n\right\} \le \frac{\mu \|z_n - y_n\|}{\|f z_n - f y_n\|},$

which implies that

(24) $\|f z_n - f y_n\| \le \frac{\mu}{\lambda_{n+1}} \|z_n - y_n\|, \quad \forall n \ge 1.$

Let $q \in \Gamma$. By applying the triangle inequality, we obtain

(25) $\|w_n - q\| = \|x_n + \theta_n (x_n - x_{n-1}) - q\| \le \|x_n - q\| + \theta_n \|x_n - x_{n-1}\| = \|x_n - q\| + \alpha_n \cdot \frac{\theta_n}{\alpha_n} \|x_n - x_{n-1}\|.$

Since, by (23), $\lim_{n \to \infty} \frac{\theta_n}{\alpha_n} \|x_n - x_{n-1}\| = 0$, there exists a constant $M_1 > 0$ such that $\frac{\theta_n}{\alpha_n} \|x_n - x_{n-1}\| \le M_1$ for all $n \ge 1$. Hence, it follows from (25) that

(26) $\|w_n - q\| \le \|x_n - q\| + \alpha_n M_1.$

By applying Lemma 2.6(iii) and the Cauchy-Schwarz inequality, we obtain

(27) $\|w_n - q\|^2 = \|x_n + \theta_n (x_n - x_{n-1}) - q\|^2 = \|x_n - q\|^2 + \theta_n^2 \|x_n - x_{n-1}\|^2 + 2 \theta_n \langle x_n - q, x_n - x_{n-1} \rangle \le \|x_n - q\|^2 + \theta_n^2 \|x_n - x_{n-1}\|^2 + 2 \theta_n \|x_n - q\| \|x_n - x_{n-1}\| = \|x_n - q\|^2 + \theta_n \|x_n - x_{n-1}\| \left(2 \|x_n - q\| + \theta_n \|x_n - x_{n-1}\|\right) \le \|x_n - q\|^2 + 3 M \alpha_n \cdot \frac{\theta_n}{\alpha_n} \|x_n - x_{n-1}\|,$

where $M \coloneqq \sup_{n \in \mathbb{N}} \{\|x_n - q\|, \theta \|x_n - x_{n-1}\|\}$.

Next, by the definition of $y_n$ and the firm nonexpansiveness of $P_C$, we obtain

$\|y_n - q\|^2 = \|P_C(z_n - \lambda_n f z_n) - q\|^2 \le \langle y_n - q, z_n - \lambda_n f z_n - q \rangle = \frac{1}{2} \left( \|y_n - q\|^2 + \|z_n - \lambda_n f z_n - q\|^2 - \|y_n - z_n + \lambda_n f z_n\|^2 \right) = \frac{1}{2} \left( \|y_n - q\|^2 + \|z_n - q\|^2 - \|y_n - z_n\|^2 - 2 \lambda_n \langle y_n - q, f z_n \rangle \right),$

which implies that

(28) $\|y_n - q\|^2 \le \|z_n - q\|^2 - \|y_n - z_n\|^2 - 2 \lambda_n \langle y_n - q, f z_n \rangle.$

In addition, by (24) and (28), we have

(29) $\|u_n - q\|^2 = \|y_n - \lambda_n (f y_n - f z_n) - q\|^2 = \|y_n - q\|^2 - 2 \lambda_n \langle y_n - q, f y_n - f z_n \rangle + \lambda_n^2 \|f y_n - f z_n\|^2 \le \|z_n - q\|^2 - \|y_n - z_n\|^2 - 2 \lambda_n \langle y_n - q, f z_n \rangle - 2 \lambda_n \langle y_n - q, f y_n - f z_n \rangle + \lambda_n^2 \|f y_n - f z_n\|^2 \le \|z_n - q\|^2 - \|y_n - z_n\|^2 + \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2} \|z_n - y_n\|^2 - 2 \lambda_n \langle y_n - q, f y_n \rangle$

(30) $= \|z_n - q\|^2 - \left(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\right) \|y_n - z_n\|^2 - 2 \lambda_n \langle y_n - q, f y_n \rangle.$

Since $y_n \in C$ and $q \in \Gamma$, we obtain $\langle y_n - q, f q \rangle \ge 0$. Thus, by the pseudomonotonicity of $f$, we have $\langle y_n - q, f y_n \rangle \ge 0$. Hence, it follows from (30) that

(31) $\|u_n - q\|^2 \le \|z_n - q\|^2 - \left(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\right) \|y_n - z_n\|^2.$

Next, consider the limit

(32) $\lim_{n \to \infty} \left(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\right) = 1 - \mu^2 > 0 \quad (0 < \mu < 1).$

Thus, there exists $N_0 > 0$ such that, for all $n > N_0$, we have $1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2} > 0$. Consequently, for all $n > N_0$, from (31), we have

(33) $\|u_n - q\|^2 \le \|z_n - q\|^2.$

By Lemma 2.6(iii) and the nonexpansiveness of $P_C$, we have

(34) $\|z_n - q\|^2 = \|P_C(w_n - \tau_n A^*(I - T) A w_n) - q\|^2 \le \|w_n - \tau_n A^*(I - T) A w_n - q\|^2 = \|w_n - q\|^2 + \tau_n^2 \|A^*(I - T) A w_n\|^2 - 2 \tau_n \langle w_n - q, A^*(I - T) A w_n \rangle.$

By Lemma 2.6(iii) and the quasi-nonexpansiveness of $T$, we have

(35) $\langle w_n - q, A^*(I - T) A w_n \rangle = \langle A w_n - A q, (I - T) A w_n \rangle = \langle T A w_n - A q + (I - T) A w_n, (I - T) A w_n \rangle = \langle T A w_n - A q, (I - T) A w_n \rangle + \|(I - T) A w_n\|^2 = \frac{1}{2} \left[ \|T A w_n - A q + (I - T) A w_n\|^2 - \|T A w_n - A q\|^2 - \|(I - T) A w_n\|^2 \right] + \|(I - T) A w_n\|^2 = \frac{1}{2} \left[ \|A w_n - A q\|^2 - \|T A w_n - A q\|^2 + \|(I - T) A w_n\|^2 \right] \ge \frac{1}{2} \left[ \|A w_n - A q\|^2 - \|A w_n - A q\|^2 + \|(I - T) A w_n\|^2 \right] = \frac{1}{2} \|(I - T) A w_n\|^2.$

Substituting (35) into (34) and applying the definition of $\tau_n$ and the condition on $\phi_n$, we obtain

(36) $\|z_n - q\|^2 \le \|w_n - q\|^2 + \tau_n^2 \|A^*(I - T) A w_n\|^2 - \tau_n \|(I - T) A w_n\|^2 = \|w_n - q\|^2 - \tau_n \left( \|(I - T) A w_n\|^2 - \tau_n \|A^*(I - T) A w_n\|^2 \right) \le \|w_n - q\|^2 - \tau_n (1 - \phi_n) \|(I - T) A w_n\|^2$

(37) $\le \|w_n - q\|^2.$

By applying (27), (31), and (36), we have

(38) $\|x_{n+1} - q\|^2 = \|\alpha_n g(w_n) + (1 - \alpha_n) u_n - q\|^2 \le \alpha_n \|g(w_n) - q\|^2 + (1 - \alpha_n) \|u_n - q\|^2 \le \alpha_n \|g(w_n) - q\|^2 + (1 - \alpha_n) \|z_n - q\|^2 - (1 - \alpha_n) \left(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\right) \|y_n - z_n\|^2 \le \alpha_n \|g(w_n) - q\|^2 + (1 - \alpha_n) \left[ \|w_n - q\|^2 - \tau_n (1 - \phi_n) \|(I - T) A w_n\|^2 \right] - (1 - \alpha_n) \left(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\right) \|y_n - z_n\|^2 \le \alpha_n \|g(w_n) - q\|^2 + (1 - \alpha_n) \|x_n - q\|^2 + 3 M \alpha_n \cdot \frac{\theta_n}{\alpha_n} \|x_n - x_{n-1}\| - \tau_n (1 - \phi_n)(1 - \alpha_n) \|(I - T) A w_n\|^2 - (1 - \alpha_n) \left(1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2}\right) \|y_n - z_n\|^2.$

Furthermore, by (26), (33), and (37), we obtain

$\|x_{n+1} - q\| = \|\alpha_n g(w_n) + (1 - \alpha_n) u_n - q\| \le \alpha_n \|g(w_n) - q\| + (1 - \alpha_n) \|u_n - q\| \le \alpha_n \|g(w_n) - g(q)\| + \alpha_n \|g(q) - q\| + (1 - \alpha_n) \|u_n - q\| \le \alpha_n \rho \|w_n - q\| + \alpha_n \|g(q) - q\| + (1 - \alpha_n) \|z_n - q\| \le \alpha_n \rho \|w_n - q\| + \alpha_n \|g(q) - q\| + (1 - \alpha_n) \|w_n - q\| \le [1 - \alpha_n(1 - \rho)]\left[\|x_n - q\| + \alpha_n M_1\right] + \alpha_n \|g(q) - q\| \le [1 - \alpha_n(1 - \rho)] \|x_n - q\| + \alpha_n (1 - \rho) \left[ \frac{\|g(q) - q\|}{1 - \rho} + \frac{M_1}{1 - \rho} \right] \le \max\left\{ \|x_n - q\|, \frac{\|g(q) - q\|}{1 - \rho} + \frac{M_1}{1 - \rho} \right\} \le \cdots \le \max\left\{ \|x_{N_0} - q\|, \frac{\|g(q) - q\|}{1 - \rho} + \frac{M_1}{1 - \rho} \right\},$

which implies that { x n } is bounded. Consequently, { w n } , { z n } , { y n } , and { u n } are bounded.

Moreover, by (21) and the fact that $\lim_{n \to \infty} \alpha_n = 0$, we have

$\lim_{n \to \infty} \|x_{n+1} - u_n\| = \lim_{n \to \infty} \alpha_n \|g(w_n) - u_n\| = 0.$□

Lemma 4.3

Let $\{z_n\}$, $\{w_n\}$, and $\{y_n\}$ be sequences generated by Algorithm 3.1 such that $\lim_{n \to \infty} \|z_n - w_n\| = 0 = \lim_{n \to \infty} \|z_n - y_n\|$. If there exists a subsequence $\{y_{n_k}\}$ of $\{y_n\}$ that converges weakly to $p \in H_1$, then $p \in \Gamma$.

Proof

Suppose $\{y_{n_k}\}$ is a subsequence of $\{y_n\}$ such that $y_{n_k} \rightharpoonup p$. Then, by the hypothesis of the lemma, we have $z_{n_k} \rightharpoonup p$ and $w_{n_k} \rightharpoonup p$. Since $A$ is a bounded linear operator, we have $A w_{n_k} \rightharpoonup A p$. From (36) and the hypothesis of the lemma, we have

$\tau_{n_k} (1 - \phi_{n_k}) \|(I - T) A w_{n_k}\|^2 \le \|w_{n_k} - q\|^2 - \|z_{n_k} - q\|^2 \le \|w_{n_k} - z_{n_k}\| \left(\|w_{n_k} - q\| + \|z_{n_k} - q\|\right) \to 0, \quad k \to \infty.$

By the definition of $\tau_n$, we obtain

$\frac{\phi_{n_k} (1 - \phi_{n_k}) \|(I - T) A w_{n_k}\|^4}{\|A^*(I - T) A w_{n_k}\|^2} \to 0, \quad k \to \infty,$

which implies that

$\frac{\|(I - T) A w_{n_k}\|^2}{\|A^*(I - T) A w_{n_k}\|} \to 0, \quad k \to \infty.$

Since $\{\|A^*(I - T) A w_{n_k}\|\}$ is bounded, it follows that

$\|(I - T) A w_{n_k}\| \to 0, \quad k \to \infty.$

Since $I - T$ is demiclosed at zero and $A w_{n_k} \rightharpoonup A p$, then by Lemma 2.5(i), we have

(39) $A p \in F(T) = F(S).$

Since $z_{n_k} \rightharpoonup p$, $\lim_{k \to \infty} \|z_{n_k} - y_{n_k}\| = 0$, and $\{y_n\} \subset C$, we have $p \in C$. From $y_{n_k} = P_C(z_{n_k} - \lambda_{n_k} f z_{n_k})$, we have $\langle z_{n_k} - \lambda_{n_k} f z_{n_k} - y_{n_k}, x - y_{n_k} \rangle \le 0$ for all $x \in C$, which implies that

$\frac{1}{\lambda_{n_k}} \langle z_{n_k} - y_{n_k}, x - y_{n_k} \rangle \le \langle f z_{n_k}, x - y_{n_k} \rangle, \quad \forall x \in C,$

which is equivalent to

(40) $\frac{1}{\lambda_{n_k}} \langle z_{n_k} - y_{n_k}, x - y_{n_k} \rangle + \langle f z_{n_k}, y_{n_k} - z_{n_k} \rangle \le \langle f z_{n_k}, x - z_{n_k} \rangle, \quad \forall x \in C.$

Since the subsequence $\{z_{n_k}\}$ converges weakly to $p \in H_1$, it is bounded. Moreover, since $f$ is uniformly continuous and $\|z_{n_k} - y_{n_k}\| \to 0$, the sequences $\{f z_{n_k}\}$ and $\{y_{n_k}\}$ are bounded as well. Since $\lim_{k \to \infty} \lambda_{n_k} = \lambda > 0$, it follows from (40) that

(41) $\liminf_{k \to \infty} \langle f z_{n_k}, x - z_{n_k} \rangle \ge 0, \quad \forall x \in C.$

Furthermore, we have

(42) $\langle f y_{n_k}, x - y_{n_k} \rangle = \langle f y_{n_k} - f z_{n_k}, x - z_{n_k} \rangle + \langle f z_{n_k}, x - z_{n_k} \rangle + \langle f y_{n_k}, z_{n_k} - y_{n_k} \rangle.$

Since $\|z_{n_k} - y_{n_k}\| \to 0$, by the uniform continuity of $f$ we obtain $\lim_{k \to \infty} \|f z_{n_k} - f y_{n_k}\| = 0$. This, combined with (41) and (42), yields

$\liminf_{k \to \infty} \langle f y_{n_k}, x - y_{n_k} \rangle \ge 0.$

Next, let $\{\Phi_k\}$ be a decreasing sequence of positive numbers such that $\Phi_k \to 0$ as $k \to \infty$. For each $k$, let $N_k$ denote the smallest positive integer such that

(43) $\langle f y_{n_j}, x - y_{n_j} \rangle + \Phi_k \ge 0, \quad \forall j \ge N_k.$

It is clear that the sequence $\{N_k\}$ is increasing since $\{\Phi_k\}$ is decreasing. From $\{y_{N_k}\} \subset C$, for any $k$, suppose that $f y_{N_k} \ne 0$ (otherwise, $y_{N_k}$ is a solution) and let

$u_{N_k} = \frac{f y_{N_k}}{\|f y_{N_k}\|^2}.$

Then, $\langle f y_{N_k}, u_{N_k} \rangle = 1$ for each $k$. From (43), we have

$\langle f y_{N_k}, x + \Phi_k u_{N_k} - y_{N_k} \rangle \ge 0, \quad \forall k.$

By the pseudomonotonicity of $f$, we obtain

$\langle f(x + \Phi_k u_{N_k}), x + \Phi_k u_{N_k} - y_{N_k} \rangle \ge 0,$

which is equivalent to

(44) $\langle f x, x - y_{N_k} \rangle \ge \langle f x - f(x + \Phi_k u_{N_k}), x + \Phi_k u_{N_k} - y_{N_k} \rangle - \Phi_k \langle f x, u_{N_k} \rangle.$

Since $z_{n_k} \rightharpoonup p$ and $\lim_{k \to \infty} \|z_{n_k} - y_{n_k}\| = 0$, we have $y_{N_k} \rightharpoonup p \in C$. We assume that $f p \ne 0$ (otherwise, $p$ is a solution). Since $f$ satisfies condition (C2)(a), we obtain

$0 < \|f p\| \le \liminf_{k \to \infty} \|f y_{n_k}\|.$

Using $\{y_{N_k}\} \subset \{y_{n_k}\}$ and $\Phi_k \to 0$ as $k \to \infty$, we have

$0 \le \limsup_{k \to \infty} \Phi_k \|u_{N_k}\| = \limsup_{k \to \infty} \frac{\Phi_k}{\|f y_{N_k}\|} \le \frac{\limsup_{k \to \infty} \Phi_k}{\liminf_{k \to \infty} \|f y_{n_k}\|} = 0,$

which implies that lim k Φ k u N k = 0 . From the facts that f is uniformly continuous, { y N k } and { u N k } are bounded and lim k Φ k u N k = 0 , it follows from (44) that

lim inf k → ∞ ⟨ f x , x − y N k ⟩ ≥ 0 .

Thus, we have

⟨ f x , x − p ⟩ = lim k → ∞ ⟨ f x , x − y N k ⟩ = lim inf k → ∞ ⟨ f x , x − y N k ⟩ ≥ 0 , for all x ∈ C .

Thus, by Lemma 2.10, we obtain p VI ( C , f ) . This together with (39) implies that p Γ as required.□

Theorem 4.4

Let { x n } be a sequence generated by Algorithm 3.1 under conditions (C1)–(C4). Then, { x n } converges strongly to z = P Γ g ( z ) .

Proof

Let z = P Γ g ( z ) . We consider two cases to prove the theorem.

Case 1: Suppose { ‖ x n − z ‖ } is monotone decreasing. Then, by Lemma 4.2, { ‖ x n − z ‖ } is convergent. Hence,

(45) lim n → ∞ ‖ x n − z ‖ = lim n → ∞ ‖ x n + 1 − z ‖ .

From (38), together with (32), (45), and the fact that lim n → ∞ α n = 0 , we obtain

(46) lim n → ∞ ‖ y n − z n ‖ = 0 .

Next, by the definition of u n and applying the uniform continuity of f , we have

(47) ‖ u n − y n ‖ = λ n ‖ f y n − f z n ‖ → 0 , as n → ∞ .

From the definition of w n and by Remark 3.2 (1), we have

(48) ‖ w n − x n ‖ = θ n ‖ x n − x n − 1 ‖ → 0 , as n → ∞ .

From (46) and (47), we have

(49) ‖ u n − z n ‖ ≤ ‖ u n − y n ‖ + ‖ y n − z n ‖ → 0 , as n → ∞ .

From (38), we have

τ n ( 1 − ϕ n ) ( 1 − α n ) ‖ ( I − T ) A w n ‖ 2 ≤ α n ‖ g ( w n ) − q ‖ 2 + ( 1 − α n ) ‖ x n − q ‖ 2 + 3 M α n ( θ n / α n ) ‖ x n − x n − 1 ‖ − ‖ x n + 1 − q ‖ 2 .

By applying (45) together with the fact that lim n α n = 0 , we obtain

τ n ( 1 − ϕ n ) ‖ ( I − T ) A w n ‖ 2 → 0 , as n → ∞ .

By the definition of τ n and the condition on ϕ n , we obtain

τ n ( 1 − ϕ n ) ‖ ( I − T ) A w n ‖ 4 / ‖ A * ( I − T ) A w n ‖ 2 → 0 , as n → ∞ .

Consequently, we have

‖ ( I − T ) A w n ‖ 2 / ‖ A * ( I − T ) A w n ‖ → 0 , as n → ∞ .

Since { ‖ A * ( I − T ) A w n ‖ } is bounded, it follows that

(50) ‖ ( I − T ) A w n ‖ → 0 , as n → ∞ .

From this, we obtain

(51) ‖ A * ( I − T ) A w n ‖ ≤ ‖ A * ‖ ‖ ( I − T ) A w n ‖ = ‖ A ‖ ‖ ( I − T ) A w n ‖ → 0 , as n → ∞ .

From (34) and (37), we observe that

(52) ‖ w n − τ n A * ( I − T ) A w n − q ‖ 2 ≤ ‖ w n − q ‖ 2 .

By Lemma 2.6(iii) and (52), together with the firm nonexpansivity of P C , we have

‖ z n − q ‖ 2 = ‖ P C ( w n − τ n A * ( I − T ) A w n ) − q ‖ 2 ≤ ⟨ z n − q , w n − τ n A * ( I − T ) A w n − q ⟩ = ( 1 / 2 ) ( ‖ z n − q ‖ 2 + ‖ w n − τ n A * ( I − T ) A w n − q ‖ 2 − ‖ z n − w n + τ n A * ( I − T ) A w n ‖ 2 ) ≤ ( 1 / 2 ) [ ‖ z n − q ‖ 2 + ‖ w n − q ‖ 2 − ( ‖ z n − w n ‖ 2 + τ n 2 ‖ A * ( I − T ) A w n ‖ 2 + 2 τ n ⟨ z n − w n , A * ( I − T ) A w n ⟩ ) ] .

From which we obtain

(53) ‖ z n − q ‖ 2 ≤ ‖ w n − q ‖ 2 − ‖ z n − w n ‖ 2 − 2 τ n ⟨ z n − w n , A * ( I − T ) A w n ⟩ ≤ ‖ w n − q ‖ 2 − ‖ z n − w n ‖ 2 + 2 τ n ‖ w n − z n ‖ ‖ A * ( I − T ) A w n ‖ ≤ ‖ w n − q ‖ 2 − ‖ z n − w n ‖ 2 + 2 M 2 ‖ A * ( I − T ) A w n ‖ ,

where M 2 = sup { τ n ‖ w n − z n ‖ : n ≥ 1 } . Now, by applying Lemma 2.6 and equations (27), (33), and (53), we have

x n + 1 q 2 α n g ( w n ) q 2 + ( 1 α n ) u n q 2 α n g ( w n ) q 2 + ( 1 α n ) z n q 2 α n g ( w n ) q 2 + ( 1 α n ) [ w n q 2 z n w n 2 + 2 M 2 A ( I T ) A w n ] α n g ( w n ) q 2 + ( 1 α n ) x n q 2 + 3 M α n θ n α n x n x n 1 z n w n 2 + 2 M 2 A ( I T ) A w n = ( 1 α n ) x n q 2 + α n g ( w n ) q 2 + 3 M ( 1 α n ) θ n α n x n x n 1 + 2 M 2 ( 1 α n ) A ( I T ) A w n ( 1 α n ) z n w n 2 ,

which implies that

( 1 α n ) z n w n 2 ( 1 α n ) x n q 2 x n + 1 q 2 + α n [ g ( w n ) q 2 + 3 M ( 1 α n ) θ n α n x n x n 1 ] + 2 M 2 ( 1 α n ) A ( I T ) A w n .

By applying (45), (51) and the fact that lim n α n = 0 , we obtain

(54) lim n → ∞ ‖ z n − w n ‖ = 0 .

Moreover, from (48) and (54), we obtain

(55) ‖ x n − z n ‖ ≤ ‖ x n − w n ‖ + ‖ w n − z n ‖ → 0 , as n → ∞ .

From the definition of x n + 1 , we have

‖ x n + 1 − z n ‖ = ‖ α n ( g ( w n ) − z n ) + ( 1 − α n ) ( u n − z n ) ‖ ≤ α n ‖ g ( w n ) − z n ‖ + ( 1 − α n ) ‖ u n − z n ‖ .

By (49) and the condition on α n , we obtain

(56) lim n → ∞ ‖ x n + 1 − z n ‖ = 0 .

Similarly, from (55) and (56), we have

(57) ‖ x n + 1 − x n ‖ ≤ ‖ x n + 1 − z n ‖ + ‖ z n − x n ‖ → 0 , as n → ∞ .

Since { x n } is bounded, there exists a subsequence { x n k } of { x n } such that { x n k } converges weakly to some p H 1 and

(58) limsup n → ∞ ⟨ g ( z ) − z , x n − z ⟩ = lim k → ∞ ⟨ g ( z ) − z , x n k − z ⟩ = ⟨ g ( z ) − z , p − z ⟩ .

In addition, by equations (46) and (54) and Lemma 4.3, we obtain that p Γ . Since z = P Γ g ( z ) and by (58), we obtain

limsup n → ∞ ⟨ g ( z ) − z , x n − z ⟩ ≤ 0 ,

which, together with (57), implies that

(59) limsup n → ∞ ⟨ g ( z ) − z , x n + 1 − z ⟩ = limsup n → ∞ ( ⟨ g ( z ) − z , x n + 1 − x n ⟩ + ⟨ g ( z ) − z , x n − z ⟩ ) ≤ 0 .

Next, by applying equations (27), (33), and (37) and Lemma 2.6(ii), we obtain

x n + 1 z 2 ( 1 α n ) 2 u n z 2 + 2 α n g ( w n ) z , x n + 1 z = ( 1 α n ) 2 u n z 2 + 2 α n g ( w n ) g ( z ) , x n + 1 z + 2 α n g ( z ) z , x n + 1 z ( 1 α n ) 2 x n z 2 + 3 M α n θ n α n x n x n 1 + 2 α n ρ w n z x n + 1 z + 2 α n g ( z ) z , x n + 1 z ( 1 α n ) 2 x n z 2 + 3 M α n θ n α n x n x n 1 + α n ρ w n z 2 + α n ρ x n + 1 z 2 + 2 α n g ( z ) z , x n + 1 z ( 1 α n ) 2 x n z 2 + 3 M α n θ n α n x n x n 1 + α n ρ x n z 2 + 3 M α n θ n α n x n x n 1 + α n ρ x n + 1 z 2 + 2 α n g ( z ) z , x n + 1 z = ( ( 1 α n ) 2 + α n ρ ) x n z 2 + 3 M α n θ n α n x n x n 1 + α n ρ x n + 1 z 2 + 2 α n g ( z ) z , x n + 1 z .

From which we obtain

(60) x n + 1 z 2 ( 1 2 α n + α n 2 + α n ρ ) 1 α n ρ x n z 2 + 3 M ( ( 1 α n ) 2 + α n ρ ) ( 1 α n ρ ) α n θ n α n x n x n 1 + 2 α n ( 1 α n ρ ) g ( z ) z , x n + 1 z = ( 1 2 α n + α n ρ ) 1 α n ρ x n z 2 + α n 2 ( 1 α n ρ ) x n z 2 + 3 M ( ( 1 α n ) 2 + α n ρ ) ( 1 α n ρ ) α n θ n α n x n x n 1 + 2 α n ( 1 α n ρ ) g ( z ) z , x n + 1 z 1 2 α n ( 1 ρ ) ( 1 α n ρ ) x n z 2 + 2 α n ( 1 ρ ) ( 1 α n ρ ) α n 2 ( 1 ρ ) M 3 + 3 M ( ( 1 α n ) 2 + α n ρ ) 2 ( 1 ρ ) θ n α n x n x n 1 + 1 ( 1 ρ ) g ( z ) z , x n + 1 z ,

where M 3 = sup { ‖ x n − z ‖ 2 : n ∈ N } . By Remark 3.2(1), (59), and Lemma 2.4, we obtain that lim n → ∞ ‖ x n − z ‖ = 0 . Thus, { x n } converges strongly to z = P Γ g ( z ) .

Case 2: Suppose that { ‖ x n − z ‖ } is not monotone decreasing. Then, there exists a subsequence { ‖ x n j − z ‖ 2 } of { ‖ x n − z ‖ 2 } such that

‖ x n j − z ‖ 2 < ‖ x n j + 1 − z ‖ 2 , for all j ∈ N .

Then, by Lemma 2.7, there exists a nondecreasing sequence { m k } of N such that lim k → ∞ m k = + ∞ and the following inequalities hold:

(61) ‖ x m k − z ‖ 2 ≤ ‖ x m k + 1 − z ‖ 2 and ‖ x k − z ‖ 2 ≤ ‖ x m k + 1 − z ‖ 2 .

By following similar arguments as in Case 1, we obtain

lim k → ∞ ‖ y m k − z m k ‖ = 0 ,

lim k → ∞ ‖ x m k − w m k ‖ = 0 ,

lim k → ∞ ‖ x m k − x m k + 1 ‖ = 0 ,

and

(62) limsup k → ∞ ⟨ g ( z ) − z , x m k + 1 − z ⟩ ≤ 0 .

From (60), we obtain

x m k + 1 z 2 1 2 α m k ( 1 ρ ) 1 α m k ρ x m k z 2 + 2 α m k ( 1 ρ ) 1 α m k ρ α m k 2 ( 1 ρ ) M 3 + 3 M ( ( 1 α m k ) 2 + α m k ρ ) 2 ( 1 ρ ) θ m k α m k x m k x m k 1 + 1 ( 1 ρ ) g ( z ) z , x m k + 1 z ,

which implies that

x m k + 1 z 2 α m k 2 ( 1 ρ ) M 3 + 3 M ( ( 1 α m k ) 2 + α m k ρ ) 2 ( 1 ρ ) θ m k α m k x m k x m k 1 + 1 1 ρ g ( z ) z , x m k + 1 z .

From (61), we have that

x k z 2 x m k + 1 z 2 α m k 2 ( 1 ρ ) M 3 + 3 M ( ( 1 α m k ) 2 + α m k ρ ) 2 ( 1 ρ ) θ m k α m k x m k x m k 1 + 1 1 ρ g ( z ) z , x m k + 1 z

By Remark 3.2, (62), and the fact that lim k → ∞ α m k = 0 , we have limsup k → ∞ ‖ x k − z ‖ ≤ 0 , that is, lim k → ∞ ‖ x k − z ‖ = 0 . Thus, { x k } converges strongly to z = P Γ g ( z ) .□

If we set g ( x ) = v for arbitrary but fixed v H 1 and for all x H 1 in Algorithm 3.1, we obtain the following result as a corollary to Theorem 4.4.

Corollary 4.5

Let v ∈ H 1 be a fixed element and let { x n } be a sequence generated by the following algorithm such that conditions (C1)–(C4) of Theorem 4.4 hold. Then, { x n } converges strongly to z = P Γ ( v ) .

Algorithm 4.6

Step 0: Select x 0 , x 1 H , λ 1 > 0 , μ ( 0 , 1 ) , and 0 < ϕ 1 ϕ n ϕ 2 < 1 and set n = 1 .

Step 1: Given the ( n 1 )th and n th iterates, choose θ n such that 0 θ n θ ˜ n with θ ˜ n defined as follows:

(63) θ ˜ n = min { θ , ε n / ‖ x n − x n − 1 ‖ } , if x n ≠ x n − 1 ; θ , otherwise.

Step 2: Compute

w n = x n + θ n ( x n x n 1 ) .

Step 3: Compute

(64) z n = P C ( w n τ n A ( I T ) A w n ) ,

where

T = ( 1 η ) I + η S ( ( 1 μ ) I + μ S ) .

Step 4: Compute

(65) y n = P C ( z n λ n f z n ) .

Step 5: Compute

u n = y n λ n ( f y n f z n ) ,

(66) x n + 1 = α n v + ( 1 α n ) u n ,

where

(67) λ n + 1 = min { μ ‖ z n − y n ‖ / ‖ f z n − f y n ‖ , λ n + ψ n } , if f z n ≠ f y n ; λ n + ψ n , otherwise,

and

τ n = ϕ n ‖ ( I − T ) A w n ‖ 2 / ‖ A * ( I − T ) A w n ‖ 2 , if A w n ≠ T A w n ; τ , otherwise ( τ being any nonnegative real number ).

Set n n + 1 and go to Step 1.
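To make the flow of Algorithm 4.6 concrete, the following is a minimal finite-dimensional sketch in Python. All problem data below are illustrative assumptions, not the paper's test setting: f(x) = x is monotone (hence pseudomonotone), S(y) = y/2 is nonexpansive, A is a scaled orthogonal matrix, C is the ball of radius 2, and v = 0. The sketch only demonstrates the inertial step, the self-adaptive τ_n and λ_n updates, and the single projection per VIP step; no linesearch is performed anywhere in the loop.

```python
import numpy as np

# Toy problem data (illustrative assumptions, see the lead-in above).
rng = np.random.default_rng(0)
n = 5
f = lambda x: x                               # monotone, hence pseudomonotone
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = 0.5 * Q                                   # bounded linear operator
S = lambda y: y / 2                           # nonexpansive mapping on H2
eta, xi = 0.22, 0.27
T = lambda y: (1 - eta) * y + eta * S((1 - xi) * y + xi * S(y))

def proj_C(x, radius=2.0):                    # projection onto {x : ||x|| <= radius}
    nx = np.linalg.norm(x)
    return x if nx <= radius else radius * x / nx

mu, theta, phi, lam = 0.5, 0.95, 0.5, 0.5
v = np.zeros(n)                               # fixed anchor, as in Corollary 4.5
x_prev, x = np.ones(n), 2.0 * np.ones(n)
for it in range(1, 201):
    alpha = 1.0 / (2 * it + 1)
    eps = alpha ** 2                          # ensures eps_n / alpha_n -> 0
    d = np.linalg.norm(x - x_prev)
    th = min(theta, eps / d) if d > 0 else theta
    w = x + th * (x - x_prev)                 # Step 2: inertial extrapolation
    r = A @ w - T(A @ w)                      # Step 3: self-adaptive tau_n --
    Ar = A.T @ r                              # no knowledge of ||A|| is needed
    tau = phi * (r @ r) / (Ar @ Ar) if np.linalg.norm(Ar) > 0 else 0.0
    z = proj_C(w - tau * Ar)
    y = proj_C(z - lam * f(z))                # Step 4: the only projection onto C
    u = y - lam * (f(y) - f(z))               # Step 5: Tseng-type correction
    x_prev, x = x, alpha * v + (1 - alpha) * u
    denom = np.linalg.norm(f(z) - f(y))       # self-adaptive lambda update (67)
    psi = 100.0 / (2 * it + 1) ** 3
    lam = min(mu * np.linalg.norm(z - y) / denom, lam + psi) if denom > 0 else lam + psi
```

With these toy data the solution of the split problem is x* = 0, and the iterates contract toward it geometrically.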

5 Numerical examples

In this section, we present some numerical examples and compare our proposed method with some existing methods in the literature. We compare our proposed Algorithm 3.1 (Proposed Alg.) with the algorithms in Appendix A.1 (Tian and Jiang Alg.), Appendix A.2 (Pham et al. Alg.), Appendix A.3 (He et al. Alg.), and Algorithm 1.6 (Ogwo et al. Alg.). In our computations, we choose α n = 1 / ( 2 n + 1 ) , ε n = 1 / ( 2 n + 1 ) 2 , g ( x ) = x / 2 , μ = 0.95 , λ = 1.5 , ϕ n = 2 n / ( 3 n + 1 ) , ψ n = 100 / ( 2 n + 1 ) 3 , θ = 0.95 , η = 0.22 , ξ = 0.27 , τ n = τ = 0.01 , λ 1 = 0.5 , and S x = x / 2 .
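As an illustrative numeric sanity check (not part of the paper's experiments), the parameter sequences chosen above can be verified to satisfy the usual standing conditions: α_n → 0 with a divergent sum, ε_n/α_n → 0, and {ψ_n} summable.

```python
import numpy as np

# Evaluate the parameter sequences on a long index range and check the
# standing conditions numerically (illustrative only).
n = np.arange(1, 200001, dtype=float)
alpha = 1.0 / (2.0 * n + 1.0)
eps = 1.0 / (2.0 * n + 1.0) ** 2
psi = 100.0 / (2.0 * n + 1.0) ** 3

assert alpha[-1] < 1e-5                # alpha_n -> 0
assert alpha.sum() > 5.0               # partial sums keep growing (harmonic-type)
assert (eps / alpha)[-1] < 1e-5        # eps_n / alpha_n -> 0
assert psi[10000:].sum() < 1e-4        # tail of sum psi_n is negligible (summable)
```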

As for the algorithm of He et al. [23], we set μ 1 = 5 ( ‖ T ⊤ H T ‖ + L 1 ) / v , μ 2 = 10 ( ‖ T ⊤ H T ‖ + L 2 ) / v , v = 0.8 , ρ = 2.5 , H = 2 ‖ T ⊤ T ‖ I m , and γ = 1.5 .

Example 5.1

In this example, we consider Example 5.2 of He et al. [23]. Let

(68) min { G 1 ( x ) + G 2 ( y ) : T x = y , x ∈ κ , y ∈ г }

be a separable, convex, and quadratic programming problem, where

G 1 ( x ) = ( 1 / 2 ) x ⊤ M 1 x + q 1 ⊤ x ( x ⊤ denotes the transpose of x )

and

G 2 ( y ) = ( 1 / 2 ) y ⊤ M 2 y + q 2 ⊤ y .

The problem (68) can be rewritten as SVIP (equations (8) and (9)), where

(69) f ( x ) = M 1 x + q 1 and g ( y ) = M 2 y + q 2 .

The matrices M 1 and M 2 are defined as M i = V i Σ i V i ⊤ , where V i = I − 2 v i v i ⊤ / ‖ v i ‖ 2 and Σ i = diag ( σ i 1 , σ i 2 , … , σ i N i ) are the Householder and the diagonal matrices, respectively, with N 1 = N and N 2 = m being the dimensions of x and y , respectively. Moreover, let σ i , j be defined as follows:

σ i , j = cos ( j π / ( N i + 1 ) ) + 1 + [ cos ( π / ( N i + 1 ) ) + 1 − C ˆ i ( cos ( N i π / ( N i + 1 ) ) + 1 ) ] / ( C ˆ i − 1 ) , j = 1 , 2 , … , N i ,

where C ˆ i is the prescribed condition number of M i . We set C ˆ i = 10 4 and q i = 0 , i = 1 , 2 , and take the entries of the vector v i ∈ R N i ( i = 1 , 2 ) uniformly in ( − 1 , 1 ). Hence, f and g are monotone and Lipschitz continuous operators with L i = ‖ M i ‖ , i = 1 , 2 . Moreover, the bounded linear operator T ∈ R m × N is generated with independent Gaussian components distributed as N ( 0 , 1 ) , and then each column of T is normalized to unit norm. Let κ = { x ∈ R N : ‖ x ‖ ≤ 1 } and г = { y ∈ R m : l ≤ y ≤ u } , where the entries of l ∈ R m and u ∈ R m are, respectively, the smallest and largest components of y ˜ = T x ˜ , with x ˜ the sparse vector whose components are evenly distributed in ( 0 , 1 ) . Moreover, we consider different scenarios of the problem's dimensionality with N = 100 , 300 , 500 , 700 and m = N / 2 . Let the starting point for Algorithm 3.1 be x 1 = ( 1 , 1 , … , 1 ) , while the entries of x 0 are randomly generated in [ 0 , 1 ] . For the algorithm of He et al. [23], the parameters are as stated above, with starting points x 1 = ( 1 , 1 , … , 1 ) , y 1 = ( 0 , 0 , … , 0 ) , and λ 1 = ( 0 , 0 , … , 0 ) . The stopping criterion used for our computation is ‖ x n + 1 − x n ‖ < 10^−3 . We plot the graphs of errors against the number of iterations in each case. The numerical results are reported in Figures 1, 2, 3, 4 and Table 1.
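For reproducibility, the construction of one matrix M_i with prescribed condition number Ĉ_i can be sketched as follows (variable names are ours; the additive shift of the cosine values is chosen precisely so that the ratio of the largest to smallest eigenvalue equals Ĉ_i):

```python
import numpy as np

# Illustrative construction of one matrix M = V diag(sigma) V^T: V is a
# Householder reflector and the sigma_j are shifted cosine values, with the
# shift chosen so that cond(M) equals the prescribed value Chat.
rng = np.random.default_rng(1)
N, Chat = 100, 1.0e4
v = rng.uniform(-1.0, 1.0, size=N)
V = np.eye(N) - 2.0 * np.outer(v, v) / (v @ v)     # orthogonal and symmetric
c = np.cos(np.arange(1, N + 1) * np.pi / (N + 1))  # c_j = cos(j*pi/(N+1))
shift = (c[0] + 1.0 - Chat * (c[-1] + 1.0)) / (Chat - 1.0)
sigma = c + 1.0 + shift                            # sigma_1 / sigma_N = Chat
M = V @ np.diag(sigma) @ V.T                       # SPD with cond(M) = Chat
```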

Figure 1

Example 5.1: ( N = 5 , m = 5 ).

Figure 2

Example 5.1: ( N = 10 , m = 5 ).

Figure 3

Example 5.1: ( N = 10 , m = 10 ).

Figure 4

Example 5.1: ( N = 5 , m = 10 ).

Table 1

Numerical results for Example 5.1

( N = 5 , m = 5 ) ( N = 10 , m = 5 ) ( N = 10 , m = 10 ) ( N = 5 , m = 10 )
Iter. CPU time Iter. CPU time Iter. CPU time Iter. CPU time
Ogwo et al. Algorithm 1.6 35 0.0053 35 0.0061 35 0.0050 35 0.0197
Tian and Jiang App. A.1 21 0.0042 21 0.0043 21 0.0040 22 0.0148
He et al. App. A.3 18 0.0146 33 0.0157 16 0.0145 19 0.0258
Proposed Algorithm 3.1 8 0.0143 12 0.0153 13 0.0233 9 0.0236

Iter. – number of iterations; CPU – central processing unit.

Example 5.2

Let H 1 = ( ℓ 2 ( R ) , ‖ ⋅ ‖ 2 ) = H 2 , where ℓ 2 ( R ) ≔ { x = ( x 1 , x 2 , … , x i , … ) , x i ∈ R : Σ i = 1 ∞ | x i | 2 < + ∞ } , with inner product ⟨ x , y ⟩ = Σ i = 1 ∞ x i y i and norm ‖ x ‖ = ( Σ i = 1 ∞ | x i | 2 ) 1 / 2 , for all x , y ∈ ℓ 2 ( R ) . Now, let the operators f 1 , f , h : ℓ 2 ( R ) → ℓ 2 ( R ) be defined by f 1 x = f x = h x = ( 1 / ( ‖ x ‖ + s ) + ‖ x ‖ ) x ( s > 0 ) , for all x ∈ ℓ 2 ( R ) . Then, f 1 , f , and h are uniformly continuous and pseudomonotone. Let A : ℓ 2 ( R ) → ℓ 2 ( R ) be defined by A x = ( 0 , x 1 , x 2 / 2 , x 3 / 3 , … ) for all x ∈ ℓ 2 . Then, A is a bounded linear operator on ℓ 2 with adjoint A * y = ( y 2 , y 3 / 2 , y 4 / 3 , … ) for all y ∈ ℓ 2 ( R ) . Let C = { x ∈ ℓ 2 : ‖ x − y ‖ ℓ 2 ≤ a } , where y = ( 1 , 1 / 2 , 1 / 3 , … ) and a = 3 . So, C is a nonempty, closed, convex subset of ℓ 2 . Hence,

P C ( x ) = x , if ‖ x − y ‖ ℓ 2 ≤ a ; ( ( x − y ) / ‖ x − y ‖ ℓ 2 ) a + y , otherwise .
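Truncating the sequences to finitely many coordinates, this closed-form projection is straightforward to compute; a small Python sketch (the truncation length 50 is an arbitrary choice of ours):

```python
import numpy as np

# Closed-form projection onto the ball C = {x : ||x - y|| <= a} of Example 5.2.
def proj_ball(x, y, a):
    d = np.linalg.norm(x - y)
    return x if d <= a else y + a * (x - y) / d

k = np.arange(1, 51)
y = 1.0 / k                       # center y = (1, 1/2, 1/3, ...), truncated
a = 3.0
x_in = y + np.full(50, 0.1)       # inside the ball: returned unchanged
x_out = y + np.full(50, 1.0)      # outside: mapped onto the boundary
p_in, p_out = proj_ball(x_in, y, a), proj_ball(x_out, y, a)
```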

Now, we consider the following cases for the starting points:

Case I: x 0 = ( 3 , 1 , 1 / 3 , … ) and x 1 = ( 2 , 1 , 1 / 2 , … ) ,

Case II: x 0 = ( 4 , 1 , 1 / 4 , … ) and x 1 = ( 2 , 1 , 1 / 2 , … ) ,

Case III: x 0 = ( 4 , 1 , 1 / 4 , … ) and x 1 = ( 3 , 1 , 1 / 3 , … ) ,

Case IV: x 0 = ( 5 , 1 , 1 / 5 , … ) and x 1 = ( 3 , 1 , 1 / 3 , … ) .

The stopping criterion used for our computation is ‖ x n + 1 − x n ‖ < 10^−2 . We plot the graphs of errors against the number of iterations in each case. The numerical results are reported in Figures 5, 6, 7, 8 and Table 2.

Figure 5 
               Example 5.2: Case I.
Figure 5

Example 5.2: Case I.

Figure 6 
               Example 5.2: Case II.
Figure 6

Example 5.2: Case II.

Figure 7 
               Example 5.2: Case III.
Figure 7

Example 5.2: Case III.

Figure 8 
               Example 5.2: Case IV.
Figure 8

Example 5.2: Case IV.

Table 2

Numerical results for Example 5.2

Case I Case II Case III Case IV
Iter. CPU time Iter. CPU time Iter. CPU time Iter. CPU time
Ogwo et al. Algorithm 1.6 34 0.0094 24 0.0073 35 0.0080 34 0.0078
Tian and Jiang App. A.1 41 0.0144 38 0.0149 43 0.0179 44 0.0137
Pham et al. App. A.2 82 0.0148 69 0.0125 93 0.0151 92 0.0144
Proposed Algorithm 3.1 21 0.0138 21 0.0106 24 0.0116 25 0.0121

Iter. – number of iterations; CPU – central processing unit.

6 Conclusion

In this article, we introduced and studied an inertial iterative method for approximating the solution of a class of pseudomonotone SVIPs in the framework of Hilbert spaces. We established that the sequence generated by our method converges strongly to a solution of the SVIP when the cost operator is uniformly continuous, without prior knowledge of the operator norm. We gave some numerical examples to illustrate the efficacy and advantages of our method and compared it with related methods in the literature.

Acknowledgments

Abd-Semii Oluwatosin-Enitan Owolabi acknowledges with thanks the International Mathematical Union (IMU) Breakout Graduate Fellowship Award for his doctoral study. The authors sincerely thank the reviewer for his careful reading, constructive comments, and fruitful suggestions that improved the manuscript. The research of Timilehin Opeyemi Alakoya is wholly supported by the University of KwaZulu-Natal, Durban, South Africa Postdoctoral Fellowship. He is grateful for the funding and financial support. Oluwatosin Temitope Mewomo is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903). Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF.

  1. Funding information: The first author is funded by the IMU Breakout Graduate Fellowship Award for his doctoral study. The second author is supported by the University of KwaZulu-Natal Postdoctoral Fellowship. The research of the third author is supported by the NRF of South Africa (Grant Number 119903).

  2. Author contributions: Conceptualization of the article was given by A.-S.O.-E.O. and O.T.M.; methodology by A.-S.O.-E.O. and T.O.A.; formal analysis, investigation, and writing-original draft preparation by A.-S.O.-E.O. and T.O.A.; software and validation by O.T.M.; writing-review and editing by A.-S.O.-E.O., T.O.A., and O.T.M.; and project administration by O.T.M. All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Conflict of interest: The authors declare that they have no competing interests.

Appendix

A.1 Appendix

The algorithm in Tian and Jiang [40]: Let x 1 ∈ C and let { x n } , { y n } , { v n } , and { u n } be sequences defined as follows:

(A1) y n = P C ( x n − τ n A * ( I − S ) A x n ) , v n = P C ( y n − λ n f ( y n ) ) , u n = P C ( y n − λ n f ( v n ) ) , x n + 1 = α n g ( x n ) + ( 1 − α n ) u n , n ≥ 1 ,

where lim n → ∞ α n = 0 , { α n } ⊂ ( 0 , 1 ) , Σ n = 1 ∞ α n = ∞ , { τ n } ⊂ [ a , b ] for some a , b ∈ ( 0 , 1 / ‖ A ‖ 2 ) , { λ n } ⊂ [ c , d ] for some c , d ∈ ( 0 , 1 / L ) , S : H 2 → H 2 is a nonexpansive mapping, f : C → H 1 is a monotone and L -Lipschitz continuous operator, and g is a contraction on C .

A.2 Appendix

Algorithm of Pham et al. [41]

Step 0. Choose μ 0 , λ 0 > 0 , μ , λ ∈ ( 0 , 1 ) , { τ n } ⊂ [ τ ̲ , τ ¯ ] ⊂ ( 0 , 1 / ( ‖ A ‖ 2 + 1 ) ) , and { α n } ⊂ ( 0 , 1 ) such that lim n → ∞ α n = 0 and Σ n = 1 ∞ α n = + ∞ .

Step 1. Let x 0 H 1 . Set n 0 .

Step 2. Compute

u n = A x n , v n = P Q ( u n − μ n h u n ) , w n = P Q n ( u n − μ n h v n ) ,

where

Q n = { y ∈ H 2 : ⟨ u n − μ n h u n − v n , y − v n ⟩ ≤ 0 }

and

μ n + 1 = min { μ ‖ u n − v n ‖ / ‖ h u n − h v n ‖ , μ n } , if h u n ≠ h v n ; μ n , otherwise .

Step 3. Compute

y n = x n + τ n A * ( w n − u n ) , z n = P C ( y n − λ n f y n ) , t n = P C n ( y n − λ n f z n ) ,

where

C n = { x ∈ H 1 : ⟨ y n − λ n f y n − z n , x − z n ⟩ ≤ 0 }

and

λ n + 1 = min { λ ‖ y n − z n ‖ / ‖ f y n − f z n ‖ , λ n } , if f y n ≠ f z n ; λ n , otherwise .

Step 4. Compute

x n + 1 = α n x 0 + ( 1 − α n ) t n .

Set n n + 1 and go back to Step 2.

C and Q are nonempty, closed and convex subsets of H 1 and H 2 , respectively, and f : H 1 H 1 and h : H 2 H 2 are pseudomonotone and Lipschitz continuous operators.
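Note that the projections onto the half-spaces C_n and Q_n above admit a closed form, which is what makes the method implementable without an inner solver; a small sketch (the vectors c and z below are illustrative placeholders for, e.g., c = y_n − λ_n f y_n − z_n and z = z_n):

```python
import numpy as np

# Closed-form projection onto a half-space {x : <c, x - z> <= 0}.
def proj_halfspace(x, c, z):
    s = c @ (x - z)
    return x if s <= 0 else x - (s / (c @ c)) * c

c = np.array([1.0, 0.0])
z = np.zeros(2)
p = proj_halfspace(np.array([2.0, 3.0]), c, z)   # -> [0., 3.]
```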

A.3 Appendix

Algorithm of He et al. [23]

Step 0. Given a symmetric positive definite matrix H ∈ R m × m , γ ∈ ( 0 , 2 ) , and ρ ∈ ( ρ min , 3 ) , where ρ min ≔ max { − 3 , 2 ( v − 1 ) μ 1 / ( 4 ζ max ( T ⊤ H T ) ) } , and T : R N → R m is a linear operator ( T ⊤ denotes the transpose of T ). Set an initial point u 1 ≔ ( x 1 , y 1 , λ 1 ) ∈ Ω ≔ κ × г × R m , where κ and г are nonempty, closed, and convex subsets of R N and R m , respectively.

Step 1. Generate a predictor u ˜ n ≔ ( x ˜ n , y ˜ n , λ ˜ n ) with appropriate parameters μ 1 and μ 2 :

(A2) λ ¯ n = λ n − H ( T x n − y n ) , x ˜ n = P κ ( x n − ( 1 / μ 1 ) ( A ( x n ) − T ⊤ λ ¯ n ) ) , λ ˆ n = λ n − H ( T x ˆ n − y n ) , where x ˆ n ≔ ρ x n + ( 1 − ρ ) x ˜ n , y ˜ n = P г ( y n − ( 1 / μ 2 ) ( F ( y n ) + λ ˆ n ) ) , λ ˜ n = λ n − H ( T x ˜ n − y ˜ n ) .

Step 2. Update the next iterative u n + 1 ( x n + 1 , y n + 1 , λ n + 1 ) via u n + 1 = u n γ α k d ( u n , u ˜ n ) , where

α k ≔ ψ ( u n , u ˜ n ) / ‖ d ( u n , u ˜ n ) ‖ 2 , d ( u n , u ˜ n ) ≔ G ( u n − u ˜ n ) − ξ n , ψ ( u n , u ˜ n ) ≔ ⟨ λ n − λ ˜ n , ρ T ( x n − x ˜ n ) − ( y n − y ˜ n ) ⟩ + ⟨ u n − u ˜ n , d ( u n , u ˜ n ) ⟩ ,

ξ n ≔ ( ξ n x , ξ n y , 0 ) ⊤ ≔ ( A ( x n ) − A ( x ˜ n ) + T ⊤ H T ( x n − x ˜ n ) , F ( y n ) − F ( y ˜ n ) + H ( y n − y ˜ n ) , 0 ) ⊤ ,

where A and F are monotone and Lipschitz continuous with constants L 1 and L 2 , respectively, and

(A3) G ≔ diag ( μ 1 I N + ρ T ⊤ H T , μ 2 I m + H , H^−1 )

is the block diagonal matrix, with identity matrices I N and I m of size N and m , respectively. The parameters μ 1 and μ 2 are chosen such that

‖ ξ n x ‖ ≤ v μ 1 ‖ x n − x ˜ n ‖ and ‖ ξ n y ‖ ≤ v μ 2 ‖ y n − y ˜ n ‖ , where v ∈ ( 0 , 1 ) .

References

[1] G. Stampacchia, Formes bilineaires coercitives sur les ensembles convexes, C. R. Acad. Sci. Paris 258 (1964), 4413.

[2] G. Fichera, Sul problema elastostatico di Signorini con ambigue condizioni al contorno, Atti Accad. Naz. Lincei VIII, Ser. Rend. Cl. Sci. Fis. Mat. Nat. 34 (1963), 138–142.

[3] A. Gibali, S. Reich, and R. Zalas, Outer approximation methods for solving variational inequalities in Hilbert spaces, Optimization 66 (2017), no. 3, 417–437, DOI: https://doi.org/10.1080/02331934.2016.1271800.

[4] G. Kassay, S. Reich, and S. Sabach, Iterative methods for solving systems of variational inequalities in reflexive Banach spaces, SIAM J. Optim. 21 (2011), no. 4, 1319–1344, DOI: https://doi.org/10.1137/110820002.

[5] T. O. Alakoya and O. T. Mewomo, Viscosity s-iteration method with inertial technique and self-adaptive step size for split variational inclusion, equilibrium and fixed point problems, Comput. Appl. Math. 41 (2022), 39, DOI: https://doi.org/10.1007/s40314-021-01749-3.

[6] T. O. Alakoya, V. A. Uzor, and O. T. Mewomo, A new projection and contraction method for solving split monotone variational inclusion, pseudomonotone variational inequality, and common fixed point problems, Comput. Appl. Math. 42 (2023), 3, DOI: https://doi.org/10.1007/s40314-022-02138-0.

[7] Y. Censor, A. Gibali, and S. Reich, Algorithms for the split variational inequality problem, Numer. Algorithms 59 (2012), 301–323, DOI: https://doi.org/10.1007/s11075-011-9490-5.

[8] Y. Censor, A. Gibali, and S. Reich, Extensions of Korpelevich's extragradient methods for the variational inequality problem in Euclidean space, Optimization 61 (2012), no. 9, 1119–1132, DOI: https://doi.org/10.1080/02331934.2010.539689.

[9] Y. Censor, A. Gibali, and S. Reich, Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space, Optim. Methods Softw. 26 (2011), no. 4–5, 827–845, DOI: https://doi.org/10.1080/10556788.2010.551536.

[10] E. C. Godwin, T. O. Alakoya, O. T. Mewomo, and J.-C. Yao, Relaxed inertial Tseng extragradient method for variational inequality and fixed point problems, Appl. Anal. (2022), DOI: https://doi.org/10.1080/00036811.2022.2107913.

[11] V. A. Uzor, T. O. Alakoya, and O. T. Mewomo, Strong convergence of a self-adaptive inertial Tseng's extragradient method for pseudomonotone variational inequalities and fixed point problems, Open Math. 20 (2022), 234–257, DOI: https://doi.org/10.1515/math-2022-0030.

[12] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Ekon. Mat. Metody 12 (1976), no. 4, 747–756.

[13] P. E. Maingé, A hybrid extragradient-viscosity method for monotone operators and fixed point problems, SIAM J. Control Optim. 47 (2008), no. 3, 1499–1515, DOI: https://doi.org/10.1137/060675319.

[14] Y. Malitsky, Projected reflected gradient methods for monotone variational inequalities, SIAM J. Optim. 25 (2015), no. 1, 502–520, DOI: https://doi.org/10.1137/14097238X.

[15] D. V. Thong, Viscosity approximation method for solving fixed point problems and split common fixed point problems, J. Fixed Point Theory Appl. 19 (2017), 1481–1499, DOI: https://doi.org/10.1007/s11784-016-0323-y.

[16] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim. 38 (2000), no. 2, 431–446, DOI: https://doi.org/10.1137/S0363012998338806.

[17] P. E. Maingé and M. L. Gbindass, Convergence of one-step projected gradient methods for variational inequalities, J. Optim. Theory Appl. 171 (2016), 146–168, DOI: https://doi.org/10.1007/s10957-016-0972-4.

[18] D. V. Thong and D. V. Hieu, Some extragradient-viscosity algorithms for solving variational inequality problems and fixed point problems, Numer. Algorithms 82 (2019), 761–789, DOI: https://doi.org/10.1007/s11075-018-0626-8.

[19] T. O. Alakoya, V. A. Uzor, O. T. Mewomo, and J.-C. Yao, On system of monotone variational inclusion problems with fixed-point constraint, J. Inequal. Appl. 2022 (2022), 47, DOI: https://doi.org/10.1186/s13660-022-02782-4.

[20] J. K. Kim, S. Salahuddin, and W. H. Lim, General nonconvex split variational inequality problems, Korean J. Math. 25 (2017), no. 4, 469–481, DOI: https://doi.org/10.11568/kjm.2017.25.4.469.

[21] A. Moudafi, Split monotone variational inclusions, J. Optim. Theory Appl. 150 (2011), 275–283, DOI: https://doi.org/10.1007/s10957-011-9814-6.

[22] O. T. Mewomo and F. U. Ogbuisi, Convergence analysis of an iterative method for solving multiple-set split feasibility problems in certain Banach spaces, Quaest. Math. 41 (2018), 129–148, DOI: https://doi.org/10.2989/16073606.2017.1375569.

[23] H. He, C. Ling, and H. K. Xu, A relaxed projection method for split variational inequalities, J. Optim. Theory Appl. 166 (2015), no. 1, 213–233, DOI: https://doi.org/10.1007/s10957-014-0598-3.

[24] M. Tian and B. N. Jiang, Weak convergence theorem for a class of split variational inequality problems and applications in Hilbert space, J. Inequal. Appl. 2017 (2017), 123, DOI: https://doi.org/10.1186/s13660-017-1397-9.

[25] G. N. Ogwo, C. Izuchukwu, and O. T. Mewomo, A modified extragradient algorithm for a certain class of split pseudomonotone variational inequality problem, Numer. Algebra Control Optim. 12 (2022), no. 2, 373–393, DOI: https://doi.org/10.3934/naco.2021011.

[26] G. N. Ogwo, C. Izuchukwu, and O. T. Mewomo, Relaxed inertial methods for solving split variational inequality problems without product space formulation, Acta Math. Sci. Ser. B (Engl. Ed.) 42 (2022), 1701–1733, DOI: https://doi.org/10.1007/s10473-022-0501-5.

[27] V. A. Uzor, T. O. Alakoya, and O. T. Mewomo, On split monotone variational inclusion problem with multiple output sets with fixed point constraints, Comput. Methods Appl. Math. (2023), DOI: https://doi.org/10.1515/cmam-2022-0199.

[28] E. C. Godwin, C. Izuchukwu, and O. T. Mewomo, Image restorations using a modified relaxed inertial technique for generalized split feasibility problems, Math. Methods Appl. Sci. 46 (2022), no. 5, 5521–5544, DOI: https://doi.org/10.1002/mma.8849.

[29] J. B. Baillon, R. E. Bruck, and S. Reich, On the asymptotic behaviour of nonexpansive mappings and semigroups in Banach spaces, Houston J. Math. 4 (1978), 1–9.

[30] K. Goebel and S. Reich, Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings, Marcel Dekker, New York, 1984.

[31] R. E. Bruck and S. Reich, Nonexpansive projections and resolvents of accretive operators in Banach spaces, Houston J. Math. 3 (1977), 459–470.

[32] R. I. Bot, E. R. Csetnek, and P. T. Vuong, The forward-backward-forward method from continuous and discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces, European J. Oper. Res. 287 (2020), 49–60, DOI: https://doi.org/10.1016/j.ejor.2020.04.035.

[33] P. D. Khanh and P. T. Vuong, Modified projection method for strongly pseudo-monotone variational inequalities, J. Global Optim. 58 (2014), 341–350, DOI: https://doi.org/10.1007/s10898-013-0042-5.

[34] S.-S. Chang, L. Wang, and L. J. Qin, Split equality fixed point problem for quasi-pseudo-contractive mappings with applications, Fixed Point Theory Appl. 2015 (2015), 208, DOI: https://doi.org/10.1186/s13663-015-0458-3.

[35] A. O.-E. Owolabi, T. O. Alakoya, A. Taiwo, and O. T. Mewomo, A new inertial-projection algorithm for approximating common solution of variational inequality and fixed point problems of multivalued mappings, Numer. Algebra Control Optim. 12 (2022), no. 2, 255–278, DOI: https://doi.org/10.3934/naco.2021004.

[36] A. Taiwo, T. O. Alakoya, and O. T. Mewomo, Strong convergence theorem for solving equilibrium problem and fixed point of relatively nonexpansive multi-valued mappings in a Banach space with applications, Asian-Eur. J. Math. 14 (2021), no. 8, 2150137, DOI: https://doi.org/10.1142/S1793557121501370.

[37] K. K. Tan and H. K. Xu, Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process, J. Math. Anal. Appl. 178 (1993), no. 2, 301–308, DOI: https://doi.org/10.1006/jmaa.1993.1309.

[38] H. K. Xu, Viscosity approximation methods for nonexpansive mappings, J. Math. Anal. Appl. 298 (2004), 279–291, DOI: https://doi.org/10.1016/j.jmaa.2004.04.059.

[39] R. W. Cottle and J. C. Yao, Pseudo-monotone complementarity problems in Hilbert space, J. Optim. Theory Appl. 75 (1992), 281–295, DOI: https://doi.org/10.1007/BF00941468.

[40] M. Tian and B. N. Jiang, Viscosity approximation methods for a class of generalized split feasibility problems with variational inequalities in Hilbert space, Numer. Funct. Anal. Optim. 40 (2019), no. 8, 902–923, DOI: https://doi.org/10.1080/01630563.2018.1564763.

[41] P. Van Huy, N. D. Hien, and T. V. Anh, A strongly convergent modified Halpern subgradient extragradient method for solving the split variational inequality problem, Vietnam J. Math. 48 (2020), 187–204, DOI: https://doi.org/10.1007/s10013-019-00378-y.

Received: 2022-09-30
Revised: 2023-02-27
Accepted: 2023-03-05
Published Online: 2023-04-27

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
