Article Open Access

A class of strongly convergent subgradient extragradient methods for solving quasimonotone variational inequalities

  • Habib ur Rehman, Poom Kumam, Murat Ozdemir, Isa Yildirim and Wiyada Kumam
Published/Copyright: July 29, 2023

Abstract

The primary goal of this research is to investigate approximate numerical solutions of variational inequalities involving quasimonotone operators in infinite-dimensional real Hilbert spaces. The sequence obtained by the proposed viscosity-type iterative scheme for solving quasimonotone variational inequalities converges strongly to a solution. Furthermore, a new technique with an inertial mechanism is proposed that achieves strong convergence without requiring a hybrid scheme. The fundamental benefit of the suggested iterative strategy is that it replaces a step size rule dependent on the Lipschitz constant of the operator, or on a line search procedure, with monotone and non-monotone step size rules based only on operator information. This article also provides a numerical example to demonstrate how each method works.

MSC 2010: 65Y05; 65K15; 68W10; 47H05; 47H10

1 Introduction

The primary goal of this article is to investigate iterative methods for approximating solutions of the variational inequality problem (VIP) involving quasimonotone operators in a real Hilbert space. Let $\mathcal{H}$ be a real Hilbert space and $\mathcal{C}$ a nonempty, closed, and convex subset of $\mathcal{H}$. Let $K : \mathcal{H} \to \mathcal{H}$ be an operator. The problem (VIP) for $K$ on $\mathcal{C}$ is described as follows [28]:

(VIP) Find $r \in \mathcal{C}$ such that $\langle K(r), y - r \rangle \geq 0$, $\forall y \in \mathcal{C}$.

To validate the strong convergence, it is assumed that the following requirements are met:

($K$1) The solution set of problem (VIP) is denoted by $\mathrm{VI}(\mathcal{C}, K)$ and is nonempty.

($K$2) The operator $K : \mathcal{H} \to \mathcal{H}$ is quasimonotone, i.e.,

$\langle K(y_1), y_2 - y_1 \rangle > 0 \implies \langle K(y_2), y_2 - y_1 \rangle \geq 0, \quad \forall y_1, y_2 \in \mathcal{C}$.

($K$3) The operator $K : \mathcal{H} \to \mathcal{H}$ is Lipschitz continuous with constant $L > 0$, i.e.,

$\| K(y_1) - K(y_2) \| \leq L \| y_1 - y_2 \|, \quad \forall y_1, y_2 \in \mathcal{H}$.

($K$4) The operator $K : \mathcal{H} \to \mathcal{H}$ is sequentially weakly continuous, i.e., $\{K(u_n)\}$ converges weakly to $K(u)$ whenever the sequence $\{u_n\}$ converges weakly to $u$.

It is well recognized that problem (VIP) is a crucial problem in nonlinear analysis. It is a key mathematical model that unifies a number of important concepts in applied mathematics, such as nonlinear systems of equations, optimality conditions for optimization problems, complementarity problems, network equilibrium problems, and models in finance (see [15,18-21,26,30] for more details). As a result, this concept has several applications in mathematical programming, engineering, transportation analysis, network economics, game theory, and computer science.

The regularization approach and the projection method are two popular and generic methods for solving variational inequalities. It should also be mentioned that the first technique is most often used for variational inequalities involving the class of monotone operators. In this approach, the regularized subproblem is strongly monotone, and its unique solution is obtained more conveniently than that of the initial problem. In this article, we investigate projection methods, which are well known for their ease of numerical computation.

Furthermore, projection techniques can be used to find numerical solutions of variational inequalities. Many researchers have developed original projection methods for solving various types of variational inequalities (for more details, see [5-8,12,14,17,22,23,25,27,29,31,38] and others in [3,4,9-11,16,32-36]). All techniques for solving problem (VIP) rely on computing a projection onto the appropriate set $\mathcal{C}$. Korpelevich [22] and Antipin [1] independently introduced the extragradient method. Their method is as follows:

(1) $u_1 \in \mathcal{C}, \quad y_n = P_{\mathcal{C}}[u_n - \rho K(u_n)], \quad u_{n+1} = P_{\mathcal{C}}[u_n - \rho K(y_n)]$,

where $0 < \rho < \frac{1}{L}$. In this technique, two projections onto the underlying set $\mathcal{C}$ are computed in each iteration. Indeed, if the feasible set $\mathcal{C}$ has a complicated structure, the method's computational efficiency may decrease. Several methods have been proposed to overcome this limitation. In the study by Censor et al. [12], the subgradient extragradient technique was first introduced. The following strategy is used in this technique:

(2) $u_1 \in \mathcal{C}, \quad y_n = P_{\mathcal{C}}[u_n - \rho K(u_n)], \quad u_{n+1} = P_{\mathcal{H}_n}[u_n - \rho K(y_n)]$,

where $0 < \rho < \frac{1}{L}$ and

$\mathcal{H}_n = \{ z \in \mathcal{H} : \langle u_n - \rho K(u_n) - y_n, z - y_n \rangle \leq 0 \}$.
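To make the two projection patterns concrete, the following is a minimal numerical sketch (not from the paper) of schemes (1) and (2) on a toy monotone instance: $K(u) = u - a$ with $a = (2, 0)$ and $\mathcal{C}$ the closed unit ball, so the unique solution is $P_{\mathcal{C}}(a) = (1, 0)$. The operator, feasible set, and parameters are illustrative assumptions only.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    # Euclidean projection onto the closed ball C of the given radius
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

def K(u):
    # assumed toy operator K(u) = u - a: monotone, Lipschitz continuous with L = 1
    return u - np.array([2.0, 0.0])

def extragradient(u, rho=0.5, iters=500):
    # Korpelevich-Antipin scheme (1): two projections onto C per iteration
    for _ in range(iters):
        y = proj_ball(u - rho * K(u))
        u = proj_ball(u - rho * K(y))
    return u

def subgradient_extragradient(u, rho=0.5, iters=500):
    # Censor et al. scheme (2): the second projection is onto the half-space H_n,
    # which has a closed form, instead of onto C
    for _ in range(iters):
        y = proj_ball(u - rho * K(u))
        v = u - rho * K(y)
        w = u - rho * K(u) - y              # outward normal of H_n
        t = np.dot(w, w)
        if t > 0 and np.dot(w, v - y) > 0:
            v = v - (np.dot(w, v - y) / t) * w   # closed-form projection onto H_n
        u = v
    return u

r1 = extragradient(np.array([0.3, -0.7]))
r2 = subgradient_extragradient(np.array([0.3, -0.7]))
print(r1, r2)  # both approach the solution (1, 0)
```

The benefit of scheme (2) is visible in the code: the half-space projection is a single explicit formula, whereas a complicated $\mathcal{C}$ would require an inner optimization for the second projection in every iteration.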

Tseng’s extragradient technique [31], which requires only one projection onto $\mathcal{C}$ per iteration, is another notable method. The following strategy is used in this technique:

(3) $u_1 \in \mathcal{C}, \quad y_n = P_{\mathcal{C}}[u_n - \rho K(u_n)], \quad u_{n+1} = y_n + \rho [K(u_n) - K(y_n)]$,

where $0 < \rho < \frac{1}{L}$. It is important to note that the previous techniques have two main drawbacks: a fixed step size that depends on the Lipschitz modulus of the cost operator, and an iterative sequence that converges only weakly. The Lipschitz modulus is frequently unknown or difficult to estimate, and a fixed step size may impair the method's efficiency and speed of convergence. Additionally, in the setting of an infinite-dimensional Hilbert space, strongly convergent iterative sequences are particularly important.
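On the same kind of toy monotone instance ($K(u) = u - a$ with $a = (2, 0)$, $\mathcal{C}$ the closed unit ball, unique solution $(1, 0)$; all data are illustrative assumptions, not from the paper), Tseng's single-projection scheme (3) can be sketched as:

```python
import numpy as np

def proj_ball(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

def K(u):
    # assumed toy operator: monotone, Lipschitz with L = 1
    return u - np.array([2.0, 0.0])

def tseng(u, rho=0.5, iters=500):
    # Tseng's scheme (3): one projection onto C per iteration,
    # followed by an explicit (projection-free) correction step
    for _ in range(iters):
        y = proj_ball(u - rho * K(u))
        u = y + rho * (K(u) - K(y))
    return u

r = tseng(np.array([-0.4, 0.9]))
print(r)  # approaches the solution (1, 0)
```

Note that Tseng's iterates need not remain in $\mathcal{C}$, since the correction step is unconstrained; only the intermediate point $y_n$ is feasible.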

The gradient projection technique was the first well-established projection method for solving variational inequalities, and it was followed by numerous additional projection methods, including the well-known extragradient approach [22], the subgradient extragradient methods [12,13], and others [14,17,25,31,39]. The aforementioned methods solve variational inequalities involving monotone, strongly monotone, or inverse strongly monotone operators. Furthermore, while generating approximate solutions and establishing their convergence, fixed or variable step sizes frequently depend on the Lipschitz constants of the operators. This can limit implementations since, in some cases, such parameters are unknown or impossible to estimate.

The purpose of this research is to look at variational inequalities using quasimonotone operators in infinite-dimensional Hilbert spaces. Furthermore, this study shows that the iterative sequences generated by all four subgradient extragradient algorithms strongly converge to a solution. Subgradient extragradient methods use both monotone and non-monotone variable step size rules. The study of inertial algorithms is also presented, which typically enhances the efficiency of the iterative sequence. The article’s main contribution is that it investigates explicit monotone and non-monotone step size rules using inertial schemes and achieves strong convergence.

This article is written as follows. Section 2 provides preliminary results. Section 3 describes four novel methods and their convergence analysis. Finally, Section 4 provides some numerical findings to explain the practical efficiency of the proposed methods.

2 Preliminaries

This section contains various important identities as well as significant lemmas. Let us define the following set:

$\mathrm{VI}(\mathcal{C}, K)^{+} = \{ r \in \mathcal{C} : \langle K(r), y - r \rangle > 0, \ \forall y \in \mathcal{C} \}$.

For any $u, y \in \mathcal{H}$, we have

$\| u + y \|^2 = \| u \|^2 + 2 \langle u, y \rangle + \| y \|^2$.

The metric projection $P_{\mathcal{C}}(y_1)$ of $y_1 \in \mathcal{H}$ onto $\mathcal{C}$ is defined by

$P_{\mathcal{C}}(y_1) = \arg\min \{ \| y_1 - y_2 \| : y_2 \in \mathcal{C} \}$.

Lemma 2.1

[2] Suppose that $P_{\mathcal{C}} : \mathcal{H} \to \mathcal{C}$ is the metric projection. Then, the following conditions are satisfied:

  1. $e_3 = P_{\mathcal{C}}(e_1)$ if and only if

    $\langle e_1 - e_3, e_2 - e_3 \rangle \leq 0, \quad \forall e_2 \in \mathcal{C}$;

  2. $\| e_1 - P_{\mathcal{C}}(e_2) \|^2 + \| P_{\mathcal{C}}(e_2) - e_2 \|^2 \leq \| e_1 - e_2 \|^2$, $\forall e_1 \in \mathcal{C}$, $e_2 \in \mathcal{H}$;

  3. $\| e_1 - P_{\mathcal{C}}(e_1) \| \leq \| e_1 - e_2 \|$, $\forall e_2 \in \mathcal{C}$, $e_1 \in \mathcal{H}$.
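As a quick numerical illustration (not part of the original text), the three properties of Lemma 2.1 can be checked for the metric projection onto a closed ball, for which the projection has a closed form; the dimension, radius, and random sampling below are arbitrary choices.

```python
import numpy as np

def proj_ball(x, r=1.0):
    # metric projection onto C = closed ball of radius r
    nx = np.linalg.norm(x)
    return x if nx <= r else r * x / nx

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    h = rng.normal(size=3) * 2.0             # arbitrary point of H
    c = proj_ball(rng.normal(size=3) * 2.0)  # arbitrary point of C
    p = proj_ball(h)
    # (1) variational characterization: <h - P(h), c - P(h)> <= 0 for all c in C
    ok &= np.dot(h - p, c - p) <= 1e-10
    # (2) ||c - P(h)||^2 + ||P(h) - h||^2 <= ||c - h||^2
    ok &= (np.linalg.norm(c - p) ** 2 + np.linalg.norm(p - h) ** 2
           <= np.linalg.norm(c - h) ** 2 + 1e-10)
    # (3) P(h) is a nearest point: ||h - P(h)|| <= ||h - c||
    ok &= np.linalg.norm(h - p) <= np.linalg.norm(h - c) + 1e-10
print(ok)  # True
```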

Lemma 2.2

[37] Let $\{p_n\} \subset [0, +\infty)$ be a sequence such that

$p_{n+1} \leq (1 - q_n) p_n + q_n r_n, \quad \forall n \in \mathbb{N}$.

Moreover, let $\{q_n\} \subset (0, 1)$ and $\{r_n\} \subset \mathbb{R}$ be two sequences such that

$\lim_{n \to +\infty} q_n = 0, \quad \sum_{n=1}^{+\infty} q_n = +\infty \quad \text{and} \quad \limsup_{n \to +\infty} r_n \leq 0$.

Then, $\lim_{n \to +\infty} p_n = 0$.

Lemma 2.3

[24] Let $\{p_n\}$ be a real sequence such that there exists a subsequence $\{n_i\}$ of $\{n\}$ with

$p_{n_i} < p_{n_i + 1}$ for all $i \in \mathbb{N}$.

Then, there exists a nondecreasing sequence $\{m_k\} \subset \mathbb{N}$ such that $m_k \to +\infty$ as $k \to +\infty$, and the following inequalities hold for all $k \in \mathbb{N}$:

$p_{m_k} \leq p_{m_k + 1}$ and $p_k \leq p_{m_k + 1}$.

Indeed, $m_k = \max \{ j \leq k : p_j \leq p_{j+1} \}$.

3 Main results

In this section, we propose four new methods for solving quasimonotone variational inequalities in a real Hilbert space and prove strong convergence results for the proposed methods. The first and second methods involve a monotone self-adaptive step size rule that makes the algorithms independent of the Lipschitz constant. Let $g : \mathcal{C} \to \mathcal{C}$ be a strict contraction with constant $\xi \in [0, 1)$. The main algorithm is as follows:

Algorithm 1 (Explicit monotonic viscosity-type subgradient extragradient method)
Step 0: Let $u_1 \in \mathcal{C}$, $\mu \in (0, 1)$ and $\rho_1 > 0$. Moreover, choose a sequence $\{\gamma_n\} \subset (0, 1)$ such that
$\lim_{n \to +\infty} \gamma_n = 0$ and $\sum_{n=1}^{+\infty} \gamma_n = +\infty$.
Step 1: Compute
$y_n = P_{\mathcal{C}}(u_n - \rho_n K(u_n))$.
If $u_n = y_n$, then STOP. Otherwise, go to Step 2.
Step 2: Construct the half-space $\mathcal{H}_n = \{ z \in \mathcal{H} : \langle u_n - \rho_n K(u_n) - y_n, z - y_n \rangle \leq 0 \}$ and evaluate
$t_n = P_{\mathcal{H}_n}(u_n - \rho_n K(y_n))$.
Step 3: Compute
$u_{n+1} = \gamma_n g(u_n) + (1 - \gamma_n) t_n$.
Step 4: Compute
(4) $\rho_{n+1} = \begin{cases} \min \left\{ \rho_n, \dfrac{\mu \| u_n - y_n \|}{\| K(u_n) - K(y_n) \|} \right\} & \text{if } K(u_n) \neq K(y_n), \\ \rho_n & \text{otherwise.} \end{cases}$
Set $n := n + 1$ and go back to Step 1.
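As a minimal sanity check (not from the paper), Algorithm 1 can be sketched on a toy strongly monotone instance: $K(u) = u - a$ with $a = (2, 0)$, $\mathcal{C}$ the closed unit ball, contraction $g(u) = u/2$, and $\gamma_n = 1/(n+2)$; the unique solution of the corresponding (VIP) is $(1, 0)$. All problem data here are illustrative assumptions.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

def K(u):
    # assumed toy operator: strongly monotone, Lipschitz with L = 1
    return u - np.array([2.0, 0.0])

def g(u):
    # strict contraction with constant xi = 0.5 (illustrative choice)
    return 0.5 * u

def algorithm1(u, rho=0.3, mu=0.5, iters=600):
    for n in range(1, iters + 1):
        gamma = 1.0 / (n + 2)
        # Step 1: projection onto C
        y = proj_ball(u - rho * K(u))
        # Step 2: closed-form projection onto the half-space H_n
        v = u - rho * K(y)
        w = u - rho * K(u) - y
        t = np.dot(w, w)
        if t > 0 and np.dot(w, v - y) > 0:
            v -= (np.dot(w, v - y) / t) * w
        # Step 3: viscosity step
        u_next = gamma * g(u) + (1 - gamma) * v
        # Step 4: monotone step-size update (4)
        d = np.linalg.norm(K(u) - K(y))
        if d > 0:
            rho = min(rho, mu * np.linalg.norm(u - y) / d)
        u = u_next
    return u

r = algorithm1(np.array([0.2, 0.6]))
print(r)  # approaches the solution (1, 0)
```

The viscosity term $\gamma_n g(u_n)$ vanishes as $\gamma_n \to 0$; it is what upgrades weak to strong convergence in the infinite-dimensional setting, at the cost of a small drag in early iterations.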

Lemma 3.1

The step size sequence $\{\rho_n\}$ generated by (4) is monotonically decreasing, bounded below by $\min \{ \mu / L, \rho_1 \}$, and converges to some fixed $\rho > 0$.

Proof

It is obvious that $\{\rho_n\}$ is monotone and non-increasing. Since the operator $K$ is Lipschitz continuous with constant $L > 0$, we have

$\| K(u_n) - K(y_n) \| \leq L \| u_n - y_n \|$.

If $K(u_n) \neq K(y_n)$, then

(7) $\dfrac{\mu \| u_n - y_n \|}{\| K(u_n) - K(y_n) \|} \geq \dfrac{\mu \| u_n - y_n \|}{L \| u_n - y_n \|} = \dfrac{\mu}{L}$.

As a result of the aforementioned expression, $\{\rho_n\}$ is bounded below by $\min \{ \mu / L, \rho_1 \}$. Consequently, there exists $\rho > 0$ such that $\lim_{n \to \infty} \rho_n = \rho$.□

Algorithm 2 (Inertial monotonic explicit subgradient extragradient method)
Step 0: Let $u_0, u_1 \in \mathcal{C}$, $\chi > 0$, $\mu \in (0, 1)$ and $\rho_1 > 0$. Moreover, choose $\{\gamma_n\} \subset (0, 1)$ such that
$\lim_{n \to +\infty} \gamma_n = 0$ and $\sum_{n=1}^{+\infty} \gamma_n = +\infty$.
Step 1: Compute $s_n = u_n + \chi_n (u_n - u_{n-1}) - \gamma_n [u_n + \chi_n (u_n - u_{n-1})]$, where $\chi_n$ is chosen such that
(5) $0 \leq \chi_n \leq \hat{\chi}_n$ and $\hat{\chi}_n = \begin{cases} \min \left\{ \dfrac{\chi}{2}, \dfrac{\varepsilon_n}{\| u_n - u_{n-1} \|} \right\} & \text{if } u_n \neq u_{n-1}, \\ \dfrac{\chi}{2} & \text{otherwise,} \end{cases}$
with a positive sequence $\varepsilon_n = o(\gamma_n)$, i.e., $\lim_{n \to +\infty} \varepsilon_n / \gamma_n = 0$.
Step 2: Compute
$y_n = P_{\mathcal{C}}(s_n - \rho_n K(s_n))$.
If $u_n = y_n$, then STOP. Otherwise, go to Step 3.
Step 3: Construct the half-space $\mathcal{H}_n = \{ z \in \mathcal{H} : \langle s_n - \rho_n K(s_n) - y_n, z - y_n \rangle \leq 0 \}$ and evaluate
$u_{n+1} = P_{\mathcal{H}_n}(s_n - \rho_n K(y_n))$.
Step 4: Compute
(6) $\rho_{n+1} = \begin{cases} \min \left\{ \rho_n, \dfrac{\mu \| s_n - y_n \|}{\| K(s_n) - K(y_n) \|} \right\} & \text{if } K(s_n) \neq K(y_n), \\ \rho_n & \text{otherwise.} \end{cases}$
Set $n := n + 1$ and go back to Step 1.
Algorithm 3 (Non-monotonic explicit viscosity-type subgradient extragradient method)
Step 0: Let $u_1 \in \mathcal{C}$, $\rho_1 > 0$, $\mu \in (0, 1)$ and choose a non-negative real sequence $\{\varphi_n\}$ such that $\sum_{n=1}^{+\infty} \varphi_n < +\infty$. Moreover, choose $\{\gamma_n\} \subset (0, 1)$ such that
$\lim_{n \to +\infty} \gamma_n = 0$ and $\sum_{n=1}^{+\infty} \gamma_n = +\infty$.
Step 1: Compute
$y_n = P_{\mathcal{C}}(u_n - \rho_n K(u_n))$.
If $u_n = y_n$, then STOP. Otherwise, go to Step 2.
Step 2: Construct the half-space $\mathcal{H}_n = \{ z \in \mathcal{H} : \langle u_n - \rho_n K(u_n) - y_n, z - y_n \rangle \leq 0 \}$ and evaluate
$t_n = P_{\mathcal{H}_n}(u_n - \rho_n K(y_n))$.
Step 3: Compute $u_{n+1} = \gamma_n g(u_n) + (1 - \gamma_n) t_n$.
Step 4: Compute
(8) $\rho_{n+1} = \begin{cases} \min \left\{ \rho_n + \varphi_n, \dfrac{\mu \| u_n - y_n \|}{\| K(u_n) - K(y_n) \|} \right\} & \text{if } K(u_n) \neq K(y_n), \\ \rho_n + \varphi_n & \text{otherwise.} \end{cases}$
Set $n := n + 1$ and go back to Step 1.
Algorithm 4 (Inertial non-monotonic explicit subgradient extragradient method)
Step 0: Let $u_0, u_1 \in \mathcal{C}$, $\chi > 0$, $\mu \in (0, 1)$, $\rho_1 > 0$ and choose a non-negative real sequence $\{\varphi_n\}$ such that $\sum_{n=1}^{+\infty} \varphi_n < +\infty$. Moreover, choose $\{\gamma_n\} \subset (0, 1)$ such that
$\lim_{n \to +\infty} \gamma_n = 0$ and $\sum_{n=1}^{+\infty} \gamma_n = +\infty$.
Step 1: Compute $s_n = u_n + \chi_n (u_n - u_{n-1}) - \gamma_n [u_n + \chi_n (u_n - u_{n-1})]$, where $\chi_n$ is chosen such that
(9) $0 \leq \chi_n \leq \hat{\chi}_n$ and $\hat{\chi}_n = \begin{cases} \min \left\{ \dfrac{\chi}{2}, \dfrac{\varepsilon_n}{\| u_n - u_{n-1} \|} \right\} & \text{if } u_n \neq u_{n-1}, \\ \dfrac{\chi}{2} & \text{otherwise,} \end{cases}$
with a positive sequence $\varepsilon_n = o(\gamma_n)$, i.e., $\lim_{n \to +\infty} \varepsilon_n / \gamma_n = 0$.
Step 2: Compute
$y_n = P_{\mathcal{C}}(s_n - \rho_n K(s_n))$.
If $u_n = y_n$, then STOP. Otherwise, go to Step 3.
Step 3: Construct the half-space $\mathcal{H}_n = \{ z \in \mathcal{H} : \langle s_n - \rho_n K(s_n) - y_n, z - y_n \rangle \leq 0 \}$ and evaluate
$u_{n+1} = P_{\mathcal{H}_n}(s_n - \rho_n K(y_n))$.
Step 4: Compute
(10) $\rho_{n+1} = \begin{cases} \min \left\{ \rho_n + \varphi_n, \dfrac{\mu \| s_n - y_n \|}{\| K(s_n) - K(y_n) \|} \right\} & \text{if } K(s_n) \neq K(y_n), \\ \rho_n + \varphi_n & \text{otherwise.} \end{cases}$
Set $n := n + 1$ and go back to Step 1.
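To see how the monotone rule (4) and the non-monotone rule (8) differ in practice, here is a small illustrative trace (the ratio values are hypothetical, not from the paper): the monotone rule can never recover from a single small ratio, while the summable perturbations $\varphi_n$ let rule (8) grow back toward larger, more effective step sizes.

```python
# each entry stands for mu * ||u_n - y_n|| / ||K(u_n) - K(y_n)|| at one iteration
# (hypothetical values chosen purely for illustration)
ratios = [0.2, 0.8, 0.8, 0.8, 0.8]

rho_mono = rho_non = 0.5              # rho_1
for n, ratio in enumerate(ratios, start=1):
    phi = 100.0 / (n + 1) ** 2        # summable perturbation, as in Section 4
    rho_mono = min(rho_mono, ratio)       # rule (4): non-increasing
    rho_non = min(rho_non + phi, ratio)   # rule (8): may increase by phi_n

print(rho_mono, rho_non)  # 0.2 0.8
```

Since $\sum \varphi_n < +\infty$, the possible increases are summable, which is exactly what Lemma 3.2 uses to show that $\{\rho_n\}$ still converges.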

Lemma 3.2

The sequence $\{\rho_n\}$ generated by expression (8) converges to some $\rho > 0$ and satisfies the following inequality:

$\min \left\{ \dfrac{\mu}{L}, \rho_1 \right\} \leq \rho \leq \rho_1 + P, \quad \text{where } P = \sum_{n=1}^{+\infty} \varphi_n$.

Proof

Since the mapping $K$ is Lipschitz continuous with constant $L > 0$, if $K(u_n) \neq K(y_n)$, then

(11) $\dfrac{\mu \| u_n - y_n \|}{\| K(u_n) - K(y_n) \|} \geq \dfrac{\mu \| u_n - y_n \|}{L \| u_n - y_n \|} = \dfrac{\mu}{L}$.

By mathematical induction on the definition of $\rho_{n+1}$, we have

$\min \left\{ \dfrac{\mu}{L}, \rho_1 \right\} \leq \rho_n \leq \rho_1 + P$.

Let

$[\rho_{n+1} - \rho_n]^{+} = \max \{ 0, \rho_{n+1} - \rho_n \}$

and

$[\rho_{n+1} - \rho_n]^{-} = \max \{ 0, -(\rho_{n+1} - \rho_n) \}$.

From the definition of $\{\rho_n\}$, we have

(12) $\sum_{n=1}^{+\infty} [\rho_{n+1} - \rho_n]^{+} = \sum_{n=1}^{+\infty} \max \{ 0, \rho_{n+1} - \rho_n \} \leq P < +\infty$.

That is, the series $\sum_{n=1}^{+\infty} [\rho_{n+1} - \rho_n]^{+}$ is convergent. Next, we prove the convergence of $\sum_{n=1}^{+\infty} [\rho_{n+1} - \rho_n]^{-}$. Suppose, on the contrary, that $\sum_{n=1}^{+\infty} [\rho_{n+1} - \rho_n]^{-} = +\infty$. Since

$\rho_{n+1} - \rho_n = [\rho_{n+1} - \rho_n]^{+} - [\rho_{n+1} - \rho_n]^{-}$,

we have

(13) $\rho_{k+1} - \rho_1 = \sum_{n=1}^{k} (\rho_{n+1} - \rho_n) = \sum_{n=1}^{k} [\rho_{n+1} - \rho_n]^{+} - \sum_{n=1}^{k} [\rho_{n+1} - \rho_n]^{-}$.

Letting $k \to +\infty$ in expression (13), we obtain $\rho_{k+1} \to -\infty$ as $k \to \infty$, a contradiction. Due to the convergence of the series $\sum_{n=1}^{k} [\rho_{n+1} - \rho_n]^{+}$ and $\sum_{n=1}^{k} [\rho_{n+1} - \rho_n]^{-}$ as $k \to +\infty$ in expression (13), we obtain $\lim_{n \to \infty} \rho_n = \rho$. This completes the proof.□

Lemma 3.3

Let the mapping $K : \mathcal{H} \to \mathcal{H}$ satisfy conditions ($K$1)–($K$4). For any $r \in \mathrm{VI}(\mathcal{C}, K)^{+}$, we have

$\| t_n - r \|^2 \leq \| u_n - r \|^2 - \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| u_n - y_n \|^2 - \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| t_n - y_n \|^2$.

Proof

Let us consider that

(14) $\| t_n - r \|^2 = \| P_{\mathcal{H}_n}[u_n - \rho_n K(y_n)] - r \|^2 = \| P_{\mathcal{H}_n}[u_n - \rho_n K(y_n)] + [u_n - \rho_n K(y_n)] - [u_n - \rho_n K(y_n)] - r \|^2 = \| [u_n - \rho_n K(y_n)] - r \|^2 + \| P_{\mathcal{H}_n}[u_n - \rho_n K(y_n)] - [u_n - \rho_n K(y_n)] \|^2 + 2 \langle P_{\mathcal{H}_n}[u_n - \rho_n K(y_n)] - [u_n - \rho_n K(y_n)], [u_n - \rho_n K(y_n)] - r \rangle$.

Since $r \in \mathrm{VI}(\mathcal{C}, K)^{+} \subset \mathcal{C} \subset \mathcal{H}_n$, Lemma 2.1(1) gives

(15) $\| P_{\mathcal{H}_n}[u_n - \rho_n K(y_n)] - [u_n - \rho_n K(y_n)] \|^2 + \langle P_{\mathcal{H}_n}[u_n - \rho_n K(y_n)] - [u_n - \rho_n K(y_n)], [u_n - \rho_n K(y_n)] - r \rangle = \langle [u_n - \rho_n K(y_n)] - P_{\mathcal{H}_n}[u_n - \rho_n K(y_n)], r - P_{\mathcal{H}_n}[u_n - \rho_n K(y_n)] \rangle \leq 0$,

which implies that

(16) $\langle P_{\mathcal{H}_n}[u_n - \rho_n K(y_n)] - [u_n - \rho_n K(y_n)], [u_n - \rho_n K(y_n)] - r \rangle \leq - \| P_{\mathcal{H}_n}[u_n - \rho_n K(y_n)] - [u_n - \rho_n K(y_n)] \|^2$.

Combining (14) and (16), we obtain

(17) $\| t_n - r \|^2 \leq \| u_n - \rho_n K(y_n) - r \|^2 - \| P_{\mathcal{H}_n}[u_n - \rho_n K(y_n)] - [u_n - \rho_n K(y_n)] \|^2 = \| u_n - r \|^2 - \| u_n - t_n \|^2 + 2 \rho_n \langle K(y_n), r - t_n \rangle$.

Since $r \in \mathrm{VI}(\mathcal{C}, K)^{+}$, we have

$\langle K(r), y - r \rangle > 0$, for all $y \in \mathcal{C}$.

By the quasimonotonicity of $K$ (condition ($K$2)), the aforementioned expression implies that

$\langle K(y), y - r \rangle \geq 0$, for all $y \in \mathcal{C}$.

By using $y = y_n$, we obtain

$\langle K(y_n), y_n - r \rangle \geq 0$.

Thus, we have

(18) $\langle K(y_n), r - t_n \rangle = \langle K(y_n), r - y_n \rangle + \langle K(y_n), y_n - t_n \rangle \leq \langle K(y_n), y_n - t_n \rangle$.

Combining expressions (17) and (18), we obtain

(19) $\| t_n - r \|^2 \leq \| u_n - r \|^2 - \| u_n - t_n \|^2 + 2 \rho_n \langle K(y_n), y_n - t_n \rangle = \| u_n - r \|^2 - \| (u_n - y_n) + (y_n - t_n) \|^2 + 2 \rho_n \langle K(y_n), y_n - t_n \rangle = \| u_n - r \|^2 - \| u_n - y_n \|^2 - \| y_n - t_n \|^2 + 2 \langle u_n - \rho_n K(y_n) - y_n, t_n - y_n \rangle$.

Since $t_n = P_{\mathcal{H}_n}[u_n - \rho_n K(y_n)] \in \mathcal{H}_n$, we have $\langle u_n - \rho_n K(u_n) - y_n, t_n - y_n \rangle \leq 0$, and the step size rule (4) gives $\| K(u_n) - K(y_n) \| \leq \frac{\mu}{\rho_{n+1}} \| u_n - y_n \|$. Hence,

(20) $2 \langle u_n - \rho_n K(y_n) - y_n, t_n - y_n \rangle = 2 \langle u_n - \rho_n K(u_n) - y_n, t_n - y_n \rangle + 2 \rho_n \langle K(u_n) - K(y_n), t_n - y_n \rangle \leq 2 \rho_n \| K(u_n) - K(y_n) \| \| t_n - y_n \| \leq \dfrac{2 \mu \rho_n}{\rho_{n+1}} \| u_n - y_n \| \| t_n - y_n \| \leq \dfrac{\mu \rho_n}{\rho_{n+1}} \| u_n - y_n \|^2 + \dfrac{\mu \rho_n}{\rho_{n+1}} \| t_n - y_n \|^2$.

Combining expressions (19) and (20), we obtain

(21) $\| t_n - r \|^2 \leq \| u_n - r \|^2 - \| u_n - y_n \|^2 - \| y_n - t_n \|^2 + \dfrac{\mu \rho_n}{\rho_{n+1}} [ \| u_n - y_n \|^2 + \| t_n - y_n \|^2 ] = \| u_n - r \|^2 - \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| u_n - y_n \|^2 - \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| t_n - y_n \|^2$.□

Lemma 3.4

Let the mapping $K : \mathcal{H} \to \mathcal{H}$ satisfy conditions ($K$1)–($K$4). If there exists a subsequence $\{u_{n_k}\}$ converging weakly to $\hat{u}$ with $\lim_{k \to \infty} \| u_{n_k} - y_{n_k} \| = 0$, then $\hat{u} \in \mathrm{VI}(\mathcal{C}, K)$.

Proof

Since $\{u_{n_k}\}$ converges weakly to $\hat{u}$ and $\lim_{k \to \infty} \| u_{n_k} - y_{n_k} \| = 0$, the sequence $\{y_{n_k}\}$ also converges weakly to $\hat{u} \in \mathcal{C}$. Next, we need to show that $\hat{u} \in \mathrm{VI}(\mathcal{C}, K)$. We have

$y_{n_k} = P_{\mathcal{C}}[u_{n_k} - \rho_{n_k} K(u_{n_k})]$,

which, by Lemma 2.1(1), is equivalent to

(22) $\langle u_{n_k} - \rho_{n_k} K(u_{n_k}) - y_{n_k}, y - y_{n_k} \rangle \leq 0, \quad \forall y \in \mathcal{C}$.

The aforementioned inequality implies that

(23) $\langle u_{n_k} - y_{n_k}, y - y_{n_k} \rangle \leq \rho_{n_k} \langle K(u_{n_k}), y - y_{n_k} \rangle, \quad \forall y \in \mathcal{C}$.

Thus, we obtain

(24) $\dfrac{1}{\rho_{n_k}} \langle u_{n_k} - y_{n_k}, y - y_{n_k} \rangle + \langle K(u_{n_k}), y_{n_k} - u_{n_k} \rangle \leq \langle K(u_{n_k}), y - u_{n_k} \rangle, \quad \forall y \in \mathcal{C}$.

Since $\min \{ \mu / L, \rho_1 \} \leq \rho_n$ and $\{u_{n_k}\}$ is bounded, letting $k \to \infty$ in (24) and using $\lim_{k \to \infty} \| u_{n_k} - y_{n_k} \| = 0$, we obtain

(25) $\liminf_{k \to \infty} \langle K(u_{n_k}), y - u_{n_k} \rangle \geq 0, \quad \forall y \in \mathcal{C}$.

Moreover, we have

(26) $\langle K(y_{n_k}), y - y_{n_k} \rangle = \langle K(y_{n_k}) - K(u_{n_k}), y - u_{n_k} \rangle + \langle K(u_{n_k}), y - u_{n_k} \rangle + \langle K(y_{n_k}), u_{n_k} - y_{n_k} \rangle$.

Since $\lim_{k \to \infty} \| u_{n_k} - y_{n_k} \| = 0$ and $K$ is $L$-Lipschitz continuous on $\mathcal{H}$, we obtain

(27) $\lim_{k \to \infty} \| K(u_{n_k}) - K(y_{n_k}) \| = 0$.

From expressions (25), (26) and (27), we obtain

(28) $\liminf_{k \to \infty} \langle K(y_{n_k}), y - y_{n_k} \rangle \geq 0, \quad \forall y \in \mathcal{C}$.

To continue the demonstration, take a positive sequence $\{\varepsilon_k\}$ decreasing to zero. For each $\varepsilon_k$, denote by $m_k$ the smallest positive integer such that

(29) $\langle K(u_{n_i}), y - u_{n_i} \rangle + \varepsilon_k > 0, \quad \forall i \geq m_k$.

Since $\{\varepsilon_k\}$ is decreasing, it is easy to observe that the sequence $\{m_k\}$ is increasing.

Case I: If there exists a subsequence $\{u_{n_{m_{k_j}}}\}$ of $\{u_{n_{m_k}}\}$ such that $K(u_{n_{m_{k_j}}}) = 0$ for all $j$, then, letting $j \to \infty$, we obtain

(30) $\langle K(\hat{u}), y - \hat{u} \rangle = \lim_{j \to \infty} \langle K(u_{n_{m_{k_j}}}), y - \hat{u} \rangle = 0$.

Since $\hat{u} \in \mathcal{C}$, this implies that $\hat{u} \in \mathrm{VI}(\mathcal{C}, K)$.

Case II: Otherwise, there exists a fixed number $N_0 \in \mathbb{N}$ such that $K(u_{n_{m_k}}) \neq 0$ for all $n_{m_k} \geq N_0$. Consider

(31) $\Upsilon_{n_{m_k}} = \dfrac{K(u_{n_{m_k}})}{\| K(u_{n_{m_k}}) \|^2}, \quad \forall n_{m_k} \geq N_0$.

Due to the aforementioned definition, we obtain

(32) $\langle K(u_{n_{m_k}}), \Upsilon_{n_{m_k}} \rangle = 1, \quad \forall n_{m_k} \geq N_0$.

Moreover, using expressions (29) and (32), for all $n_{m_k} \geq N_0$, we have

(33) $\langle K(u_{n_{m_k}}), y + \varepsilon_k \Upsilon_{n_{m_k}} - u_{n_{m_k}} \rangle > 0$.

Since $K$ is quasimonotone, it follows that

(34) $\langle K(y + \varepsilon_k \Upsilon_{n_{m_k}}), y + \varepsilon_k \Upsilon_{n_{m_k}} - u_{n_{m_k}} \rangle \geq 0$.

For all $n_{m_k} \geq N_0$, we have

(35) $\langle K(y), y - u_{n_{m_k}} \rangle \geq \langle K(y) - K(y + \varepsilon_k \Upsilon_{n_{m_k}}), y + \varepsilon_k \Upsilon_{n_{m_k}} - u_{n_{m_k}} \rangle - \varepsilon_k \langle K(y), \Upsilon_{n_{m_k}} \rangle$.

Since $\{u_{n_k}\}$ converges weakly to $\hat{u}$ and $K$ is sequentially weakly continuous on $\mathcal{C}$, the sequence $\{K(u_{n_k})\}$ converges weakly to $K(\hat{u})$. Suppose that $K(\hat{u}) \neq 0$. By the weak lower semicontinuity of the norm, we have

(36) $\| K(\hat{u}) \| \leq \liminf_{k \to \infty} \| K(u_{n_k}) \|$.

Since $\{u_{n_{m_k}}\} \subset \{u_{n_k}\}$ and $\lim_{k \to \infty} \varepsilon_k = 0$, we have

(37) $0 \leq \limsup_{k \to \infty} \| \varepsilon_k \Upsilon_{n_{m_k}} \| = \limsup_{k \to \infty} \dfrac{\varepsilon_k}{\| K(u_{n_{m_k}}) \|} \leq \dfrac{0}{\| K(\hat{u}) \|} = 0$.

Hence, letting $k \to \infty$ in expression (35), we obtain

(38) $\langle K(y), y - \hat{u} \rangle \geq 0, \quad \forall y \in \mathcal{C}$.

Let $u \in \mathcal{C}$ be an arbitrary element and $0 < \rho \leq 1$. Set

(39) $\hat{u}_{\rho} = \rho u + (1 - \rho) \hat{u}$.

Then, $\hat{u}_{\rho} \in \mathcal{C}$, and from expression (38) with $y = \hat{u}_{\rho}$, we have

(40) $\rho \langle K(\hat{u}_{\rho}), u - \hat{u} \rangle \geq 0$.

Hence, we have

(41) $\langle K(\hat{u}_{\rho}), u - \hat{u} \rangle \geq 0$.

Letting $\rho \to 0$ and using the continuity of $K$, it follows from expression (41) that

(42) $\langle K(\hat{u}), u - \hat{u} \rangle \geq 0$.

Hence, $\hat{u} \in \mathrm{VI}(\mathcal{C}, K)$. This completes the proof of the lemma.□

Theorem 3.5

Let the mapping $K : \mathcal{H} \to \mathcal{H}$ satisfy conditions ($K$1)–($K$4). Then, the sequence $\{u_n\}$ generated by Algorithm 1 converges strongly to $r = P_{\mathrm{VI}(\mathcal{C}, K)} g(r)$.

Proof

Since $\rho_n \to \rho$, there exists a fixed number $\varepsilon \in (0, 1 - \mu)$ such that

$\lim_{n \to \infty} \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) = 1 - \mu > \varepsilon > 0$.

Therefore, there exists a fixed number $M_1 \in \mathbb{N}$ such that

(43) $1 - \dfrac{\mu \rho_n}{\rho_{n+1}} > \varepsilon > 0, \quad \forall n \geq M_1$.

From expression (21), we obtain

(44) $\| t_n - r \| \leq \| u_n - r \|, \quad \forall n \geq M_1$.

Since $g$ is a contraction with constant $\xi \in [0, 1)$, we have

(45) $\| u_{n+1} - r \| = \| \gamma_n g(u_n) + (1 - \gamma_n) t_n - r \| = \| \gamma_n [g(u_n) - g(r)] + \gamma_n [g(r) - r] + (1 - \gamma_n) [t_n - r] \| \leq \gamma_n \xi \| u_n - r \| + \gamma_n \| g(r) - r \| + (1 - \gamma_n) \| t_n - r \|$.

Combining expressions (44) and (45) with $\gamma_n \in (0, 1)$, we obtain

(46) $\| u_{n+1} - r \| \leq \gamma_n \xi \| u_n - r \| + \gamma_n \| g(r) - r \| + (1 - \gamma_n) \| u_n - r \| = [1 - \gamma_n (1 - \xi)] \| u_n - r \| + \gamma_n (1 - \xi) \dfrac{\| g(r) - r \|}{1 - \xi} \leq \max \left\{ \| u_n - r \|, \dfrac{\| g(r) - r \|}{1 - \xi} \right\} \leq \cdots \leq \max \left\{ \| u_{M_1} - r \|, \dfrac{\| g(r) - r \|}{1 - \xi} \right\}$.

Therefore, we conclude that $\{u_n\}$ is a bounded sequence. By using expression (21), we have

(47) $\| u_{n+1} - r \|^2 = \| \gamma_n [g(u_n) - r] + (1 - \gamma_n) [t_n - r] \|^2 = \gamma_n \| g(u_n) - r \|^2 + (1 - \gamma_n) \| t_n - r \|^2 - \gamma_n (1 - \gamma_n) \| g(u_n) - t_n \|^2 \leq \gamma_n \| g(u_n) - r \|^2 + (1 - \gamma_n) \left[ \| u_n - r \|^2 - \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| u_n - y_n \|^2 - \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| t_n - y_n \|^2 \right] - \gamma_n (1 - \gamma_n) \| g(u_n) - t_n \|^2 \leq \gamma_n \| g(u_n) - r \|^2 + \| u_n - r \|^2 - (1 - \gamma_n) \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| u_n - y_n \|^2 - (1 - \gamma_n) \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| t_n - y_n \|^2$.

The aforementioned expression implies that

(48) $(1 - \gamma_n) \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| u_n - y_n \|^2 + (1 - \gamma_n) \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| t_n - y_n \|^2 \leq \gamma_n \| g(u_n) - r \|^2 + \| u_n - r \|^2 - \| u_{n+1} - r \|^2$.

By using expression (44), we obtain

(49) $\| u_{n+1} - r \|^2 = \| \gamma_n [g(u_n) - r] + (1 - \gamma_n) [t_n - r] \|^2 \leq (1 - \gamma_n)^2 \| t_n - r \|^2 + 2 \gamma_n \langle g(u_n) - r, (1 - \gamma_n) [t_n - r] + \gamma_n [g(u_n) - r] \rangle = (1 - \gamma_n)^2 \| t_n - r \|^2 + 2 \gamma_n \langle g(u_n) - g(r) + g(r) - r, u_{n+1} - r \rangle = (1 - \gamma_n)^2 \| t_n - r \|^2 + 2 \gamma_n \langle g(u_n) - g(r), u_{n+1} - r \rangle + 2 \gamma_n \langle g(r) - r, u_{n+1} - r \rangle \leq (1 - \gamma_n)^2 \| t_n - r \|^2 + 2 \gamma_n \xi \| u_n - r \| \| u_{n+1} - r \| + 2 \gamma_n \langle g(r) - r, u_{n+1} - r \rangle \leq (1 + \gamma_n^2 - 2 \gamma_n) \| u_n - r \|^2 + 2 \gamma_n \xi \| u_n - r \|^2 + 2 \gamma_n \langle g(r) - r, u_{n+1} - r \rangle = (1 - 2 \gamma_n) \| u_n - r \|^2 + \gamma_n^2 \| u_n - r \|^2 + 2 \gamma_n \xi \| u_n - r \|^2 + 2 \gamma_n \langle g(r) - r, u_{n+1} - r \rangle = [1 - 2 \gamma_n (1 - \xi)] \| u_n - r \|^2 + 2 \gamma_n (1 - \xi) \left[ \dfrac{\gamma_n \| u_n - r \|^2}{2 (1 - \xi)} + \dfrac{\langle g(r) - r, u_{n+1} - r \rangle}{1 - \xi} \right]$.

Case 1: Suppose that there exists $M_2 \in \mathbb{N}$ ($M_2 \geq M_1$) such that

(50) $\| u_{n+1} - r \| \leq \| u_n - r \|, \quad \forall n \geq M_2$.

Then, $\lim_{n \to \infty} \| u_n - r \|$ exists; let $\lim_{n \to \infty} \| u_n - r \| = l$. Rearranging expression (48), we obtain

(51) $(1 - \gamma_n) \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| u_n - y_n \|^2 + (1 - \gamma_n) \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| t_n - y_n \|^2 \leq \gamma_n \| g(u_n) - r \|^2 + \| u_n - r \|^2 - \| u_{n+1} - r \|^2$.

Since $\lim_{n \to \infty} \| u_n - r \|$ exists and $\gamma_n \to 0$, we obtain

(52) $\lim_{n \to \infty} \| u_n - y_n \| = \lim_{n \to \infty} \| t_n - y_n \| = 0$.

By using the aforementioned results, we obtain

(53) $\lim_{n \to \infty} \| u_n - t_n \| \leq \lim_{n \to \infty} \| u_n - y_n \| + \lim_{n \to \infty} \| y_n - t_n \| = 0$.

It further implies that

(54) $\| u_{n+1} - u_n \| = \| \gamma_n g(u_n) + (1 - \gamma_n) t_n - u_n \| = \| \gamma_n [g(u_n) - u_n] + (1 - \gamma_n) [t_n - u_n] \| \leq \gamma_n \| g(u_n) - u_n \| + (1 - \gamma_n) \| t_n - u_n \| \to 0$.

In particular,

(55) $\lim_{n \to \infty} \| u_{n+1} - u_n \| = 0$.

Since $\{u_n\}$ is bounded, there exists a subsequence $\{u_{n_k}\}$ converging weakly to some $\hat{u}$ and attaining the $\limsup$ below; by Lemma 3.4, $\hat{u} \in \mathrm{VI}(\mathcal{C}, K)$, and since $r = P_{\mathrm{VI}(\mathcal{C}, K)} g(r)$, Lemma 2.1(1) gives

(56) $\limsup_{n \to \infty} \langle g(r) - r, u_n - r \rangle = \limsup_{k \to \infty} \langle g(r) - r, u_{n_k} - r \rangle = \langle g(r) - r, \hat{u} - r \rangle \leq 0$.

By the use of $\lim_{n \to \infty} \| u_{n+1} - u_n \| = 0$, we may deduce that

(57) $\limsup_{n \to \infty} \langle g(r) - r, u_{n+1} - r \rangle \leq \limsup_{n \to \infty} \langle g(r) - r, u_{n+1} - u_n \rangle + \limsup_{n \to \infty} \langle g(r) - r, u_n - r \rangle \leq 0$.

Since $\gamma_n \to 0$ and $\{u_n\}$ is bounded, expression (57) yields

(58) $\limsup_{n \to \infty} \left[ \dfrac{\gamma_n \| u_n - r \|^2}{2 (1 - \xi)} + \dfrac{\langle g(r) - r, u_{n+1} - r \rangle}{1 - \xi} \right] \leq 0$.

Choosing $M_3 \in \mathbb{N}$ ($M_3 \geq M_2$) large enough that $2 \gamma_n (1 - \xi) < 1$ for all $n \geq M_3$, and applying Lemma 2.2 to (49) together with (58), we conclude that $\| u_n - r \| \to 0$ as $n \to \infty$.

Case 2: Suppose that there exists a subsequence $\{n_i\}$ of $\{n\}$ such that

$\| u_{n_i} - r \| \leq \| u_{n_i + 1} - r \|, \quad \forall i \in \mathbb{N}$.

From Lemma 2.3, there exists a nondecreasing sequence $\{m_k\} \subset \mathbb{N}$ with $m_k \to +\infty$ such that

(59) $\| u_{m_k} - r \| \leq \| u_{m_k + 1} - r \|$ and $\| u_k - r \| \leq \| u_{m_k + 1} - r \|, \quad \forall k \in \mathbb{N}$.

As in Case 1, expression (47) implies that

(60) $(1 - \gamma_{m_k}) \left( 1 - \dfrac{\mu \rho_{m_k}}{\rho_{m_k + 1}} \right) \| u_{m_k} - y_{m_k} \|^2 + (1 - \gamma_{m_k}) \left( 1 - \dfrac{\mu \rho_{m_k}}{\rho_{m_k + 1}} \right) \| t_{m_k} - y_{m_k} \|^2 \leq \gamma_{m_k} \| g(u_{m_k}) - r \|^2 + \| u_{m_k} - r \|^2 - \| u_{m_k + 1} - r \|^2$.

Since $\gamma_{m_k} \to 0$ and using (59), we obtain

(61) $\lim_{k \to \infty} \| u_{m_k} - y_{m_k} \| = \lim_{k \to \infty} \| t_{m_k} - y_{m_k} \| = 0$.

Next, we obtain

(62) $\| u_{m_k + 1} - u_{m_k} \| = \| \gamma_{m_k} [g(u_{m_k}) - u_{m_k}] + (1 - \gamma_{m_k}) [t_{m_k} - u_{m_k}] \| \leq \gamma_{m_k} \| g(u_{m_k}) - u_{m_k} \| + (1 - \gamma_{m_k}) \| t_{m_k} - u_{m_k} \| \to 0$.

The same argument as in Case 1 gives

(63) $\limsup_{k \to \infty} \langle g(r) - r, u_{m_k + 1} - r \rangle \leq 0$.

By using expressions (49) and (59), we have

(64) $\| u_{m_k + 1} - r \|^2 \leq [1 - 2 \gamma_{m_k} (1 - \xi)] \| u_{m_k} - r \|^2 + 2 \gamma_{m_k} (1 - \xi) \left[ \dfrac{\gamma_{m_k} \| u_{m_k} - r \|^2}{2 (1 - \xi)} + \dfrac{\langle g(r) - r, u_{m_k + 1} - r \rangle}{1 - \xi} \right] \leq [1 - 2 \gamma_{m_k} (1 - \xi)] \| u_{m_k + 1} - r \|^2 + 2 \gamma_{m_k} (1 - \xi) \left[ \dfrac{\gamma_{m_k} \| u_{m_k} - r \|^2}{2 (1 - \xi)} + \dfrac{\langle g(r) - r, u_{m_k + 1} - r \rangle}{1 - \xi} \right]$.

It follows that

(65) $\| u_{m_k + 1} - r \|^2 \leq \dfrac{\gamma_{m_k} \| u_{m_k} - r \|^2}{2 (1 - \xi)} + \dfrac{\langle g(r) - r, u_{m_k + 1} - r \rangle}{1 - \xi}$.

Since $\gamma_{m_k} \to 0$ and $\{\| u_{m_k} - r \|\}$ is bounded, expressions (63) and (65) imply that

(66) $\| u_{m_k + 1} - r \|^2 \to 0$, as $k \to \infty$.

Combining this with (59), we obtain

(67) $\lim_{k \to \infty} \| u_k - r \|^2 \leq \lim_{k \to \infty} \| u_{m_k + 1} - r \|^2 \leq 0$.

Consequently, $u_n \to r$. This completes the proof of the theorem.□

Theorem 3.6

Let the mapping $K : \mathcal{H} \to \mathcal{H}$ satisfy conditions ($K$1)–($K$4). Then, the sequence $\{u_n\}$ generated by Algorithm 2 converges strongly to a solution $r = P_{\mathrm{VI}(\mathcal{C}, K)}(0)$.

Proof

By using the definition of $\{s_n\}$, we obtain

(68) $\| s_n - r \| = \| u_n + \chi_n (u_n - u_{n-1}) - \gamma_n u_n - \chi_n \gamma_n (u_n - u_{n-1}) - r \| = \| (1 - \gamma_n)(u_n - r) + (1 - \gamma_n) \chi_n (u_n - u_{n-1}) - \gamma_n r \|$

(69) $\leq (1 - \gamma_n) \| u_n - r \| + (1 - \gamma_n) \chi_n \| u_n - u_{n-1} \| + \gamma_n \| r \| \leq (1 - \gamma_n) \| u_n - r \| + \gamma_n K_1$,

where

$K_1 \geq (1 - \gamma_n) \dfrac{\chi_n}{\gamma_n} \| u_n - u_{n-1} \| + \| r \|$.

Since $\rho_n \to \rho$, there exists a fixed number $\epsilon \in (0, 1 - \mu)$ such that

$\lim_{n \to \infty} \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) = 1 - \mu > \epsilon > 0$.

Thus, there exists $N_1 \in \mathbb{N}$ such that

(70) $1 - \dfrac{\mu \rho_n}{\rho_{n+1}} > \epsilon > 0, \quad \forall n \geq N_1$.

From Lemma 3.3 (with $s_n$ in place of $u_n$ and $u_{n+1}$ in place of $t_n$), we may write

(71) $\| u_{n+1} - r \| \leq \| s_n - r \|, \quad \forall n \geq N_1$.

By the use of expressions (69) and (71), we obtain

(72) $\| u_{n+1} - r \| \leq (1 - \gamma_n) \| u_n - r \| + \gamma_n K_1 \leq \max \{ \| u_n - r \|, K_1 \} \leq \cdots \leq \max \{ \| u_{N_1} - r \|, K_1 \}$.

As a result, we can conclude that $\{u_n\}$ is a bounded sequence. Indeed, by expression (69), we have

(73) $\| s_n - r \|^2 \leq (1 - \gamma_n)^2 \| u_n - r \|^2 + \gamma_n^2 K_1^2 + 2 K_1 \gamma_n (1 - \gamma_n) \| u_n - r \| \leq \| u_n - r \|^2 + \gamma_n [ \gamma_n K_1^2 + 2 K_1 (1 - \gamma_n) \| u_n - r \| ] \leq \| u_n - r \|^2 + \gamma_n K_2$,

for some $K_2 > 0$. Both expressions (21) and (73) imply that

(74) $\| u_{n+1} - r \|^2 \leq \| u_n - r \|^2 + \gamma_n K_2 - \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| s_n - y_n \|^2 - \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| u_{n+1} - y_n \|^2$.

From expression (68), we can write

(75) $\| s_n - r \|^2 = \| (1 - \gamma_n)(u_n - r) + (1 - \gamma_n) \chi_n (u_n - u_{n-1}) - \gamma_n r \|^2 \leq \| (1 - \gamma_n)(u_n - r) + (1 - \gamma_n) \chi_n (u_n - u_{n-1}) \|^2 + 2 \gamma_n \langle - r, s_n - r \rangle = (1 - \gamma_n)^2 \| u_n - r \|^2 + (1 - \gamma_n)^2 \chi_n^2 \| u_n - u_{n-1} \|^2 + 2 \chi_n (1 - \gamma_n)^2 \| u_n - r \| \| u_n - u_{n-1} \| + 2 \gamma_n \langle - r, s_n - u_{n+1} \rangle + 2 \gamma_n \langle - r, u_{n+1} - r \rangle \leq (1 - \gamma_n) \| u_n - r \|^2 + \chi_n^2 \| u_n - u_{n-1} \|^2 + 2 \chi_n (1 - \gamma_n) \| u_n - r \| \| u_n - u_{n-1} \| + 2 \gamma_n \| r \| \| s_n - u_{n+1} \| + 2 \gamma_n \langle r, r - u_{n+1} \rangle = (1 - \gamma_n) \| u_n - r \|^2 + \gamma_n \left[ \chi_n \| u_n - u_{n-1} \| \dfrac{\chi_n}{\gamma_n} \| u_n - u_{n-1} \| + 2 (1 - \gamma_n) \| u_n - r \| \dfrac{\chi_n}{\gamma_n} \| u_n - u_{n-1} \| + 2 \| r \| \| s_n - u_{n+1} \| + 2 \langle r, r - u_{n+1} \rangle \right]$.

From expressions (71) and (75), we obtain

(76) $\| u_{n+1} - r \|^2 \leq (1 - \gamma_n) \| u_n - r \|^2 + \gamma_n \left[ \chi_n \| u_n - u_{n-1} \| \dfrac{\chi_n}{\gamma_n} \| u_n - u_{n-1} \| + 2 (1 - \gamma_n) \| u_n - r \| \dfrac{\chi_n}{\gamma_n} \| u_n - u_{n-1} \| + 2 \| r \| \| s_n - u_{n+1} \| + 2 \langle r, r - u_{n+1} \rangle \right]$.

Case 1: Suppose that there exists $N_2 \in \mathbb{N}$ ($N_2 \geq N_1$) such that

(77) $\| u_{n+1} - r \| \leq \| u_n - r \|, \quad \forall n \geq N_2$.

Then, $\lim_{n \to \infty} \| u_n - r \|$ exists; let $\lim_{n \to \infty} \| u_n - r \| = l$ for some $l \geq 0$. By using expression (74), we can rewrite

(78) $\left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| s_n - y_n \|^2 + \left( 1 - \dfrac{\mu \rho_n}{\rho_{n+1}} \right) \| u_{n+1} - y_n \|^2 \leq \| u_n - r \|^2 + \gamma_n K_2 - \| u_{n+1} - r \|^2$.

Since the limit of $\| u_n - r \|$ exists and $\gamma_n \to 0$, we can deduce that

(79) $\| s_n - y_n \| \to 0$ and $\| u_{n+1} - y_n \| \to 0$ as $n \to \infty$.

It follows from expression (79) that

(80) $\lim_{n \to \infty} \| s_n - u_{n+1} \| \leq \lim_{n \to \infty} \| s_n - y_n \| + \lim_{n \to \infty} \| y_n - u_{n+1} \| = 0$.

Next, we evaluate

(81) $\| s_n - u_n \| = \| u_n + \chi_n (u_n - u_{n-1}) - \gamma_n [u_n + \chi_n (u_n - u_{n-1})] - u_n \| \leq \chi_n \| u_n - u_{n-1} \| + \gamma_n \| u_n \| + \chi_n \gamma_n \| u_n - u_{n-1} \| = \gamma_n \dfrac{\chi_n}{\gamma_n} \| u_n - u_{n-1} \| + \gamma_n \| u_n \| + \gamma_n^2 \dfrac{\chi_n}{\gamma_n} \| u_n - u_{n-1} \| \to 0$.

Thus, the aforementioned expression implies that

(82) $\lim_{n \to \infty} \| u_n - u_{n+1} \| \leq \lim_{n \to \infty} \| u_n - s_n \| + \lim_{n \to \infty} \| s_n - u_{n+1} \| = 0$.

Since $r = P_{\mathrm{VI}(\mathcal{C}, K)}(0)$, we have

(83) $\langle 0 - r, y - r \rangle \leq 0, \quad \forall y \in \mathrm{VI}(\mathcal{C}, K)$.

Moreover, for a subsequence $\{u_{n_k}\}$ attaining the $\limsup$ below and converging weakly to some $\hat{u} \in \mathrm{VI}(\mathcal{C}, K)$ (by Lemma 3.4), expression (83) gives

(84) $\limsup_{n \to \infty} \langle r, r - u_n \rangle = \lim_{k \to \infty} \langle r, r - u_{n_k} \rangle = \langle r, r - \hat{u} \rangle \leq 0$.

By using the fact that $\lim_{n \to \infty} \| u_{n+1} - u_n \| = 0$ together with expression (84), we can deduce that

(85) $\limsup_{n \to \infty} \langle r, r - u_{n+1} \rangle \leq \limsup_{n \to \infty} \langle r, r - u_n \rangle + \limsup_{n \to \infty} \langle r, u_n - u_{n+1} \rangle \leq 0$.

Applying Lemma 2.2 to expression (76) together with (85) implies that $\| u_n - r \| \to 0$ as $n \to \infty$.

Case 2: Suppose that there exists a subsequence $\{n_i\}$ of $\{n\}$ such that

$\| u_{n_i} - r \| \leq \| u_{n_i + 1} - r \|, \quad \forall i \in \mathbb{N}$.

Thus, by using Lemma 2.3, there exists a nondecreasing sequence $\{m_k\} \subset \mathbb{N}$ with $m_k \to +\infty$ such that

(86) $\| u_{m_k} - r \| \leq \| u_{m_k + 1} - r \|$ and $\| u_k - r \| \leq \| u_{m_k + 1} - r \|$, for all $k \in \mathbb{N}$.

Expression (78) implies that

(87) $\left( 1 - \dfrac{\mu \rho_{m_k}}{\rho_{m_k + 1}} \right) \| s_{m_k} - y_{m_k} \|^2 + \left( 1 - \dfrac{\mu \rho_{m_k}}{\rho_{m_k + 1}} \right) \| u_{m_k + 1} - y_{m_k} \|^2 \leq \| u_{m_k} - r \|^2 + \gamma_{m_k} K_2 - \| u_{m_k + 1} - r \|^2$.

Since $\gamma_{m_k} \to 0$, we deduce the following:

(88) $\lim_{k \to \infty} \| s_{m_k} - y_{m_k} \| = \lim_{k \to \infty} \| u_{m_k + 1} - y_{m_k} \| = 0$.

It follows that

(89) $\lim_{k \to \infty} \| u_{m_k + 1} - s_{m_k} \| \leq \lim_{k \to \infty} \| u_{m_k + 1} - y_{m_k} \| + \lim_{k \to \infty} \| y_{m_k} - s_{m_k} \| = 0$.

Next, we evaluate

(90) $\| s_{m_k} - u_{m_k} \| = \| u_{m_k} + \chi_{m_k} (u_{m_k} - u_{m_k - 1}) - \gamma_{m_k} [u_{m_k} + \chi_{m_k} (u_{m_k} - u_{m_k - 1})] - u_{m_k} \| \leq \chi_{m_k} \| u_{m_k} - u_{m_k - 1} \| + \gamma_{m_k} \| u_{m_k} \| + \chi_{m_k} \gamma_{m_k} \| u_{m_k} - u_{m_k - 1} \| = \gamma_{m_k} \dfrac{\chi_{m_k}}{\gamma_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + \gamma_{m_k} \| u_{m_k} \| + \gamma_{m_k}^2 \dfrac{\chi_{m_k}}{\gamma_{m_k}} \| u_{m_k} - u_{m_k - 1} \| \to 0$.

It follows that

(91) $\lim_{k \to \infty} \| u_{m_k} - u_{m_k + 1} \| \leq \lim_{k \to \infty} \| u_{m_k} - s_{m_k} \| + \lim_{k \to \infty} \| s_{m_k} - u_{m_k + 1} \| = 0$.

The same argument as in Case 1 gives

(92) $\limsup_{k \to \infty} \langle r, r - u_{m_k + 1} \rangle \leq 0$.

Now, using expressions (76) and (86), we have

(93) $\| u_{m_k + 1} - r \|^2 \leq (1 - \gamma_{m_k}) \| u_{m_k} - r \|^2 + \gamma_{m_k} \left[ \chi_{m_k} \| u_{m_k} - u_{m_k - 1} \| \dfrac{\chi_{m_k}}{\gamma_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + 2 (1 - \gamma_{m_k}) \| u_{m_k} - r \| \dfrac{\chi_{m_k}}{\gamma_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + 2 \| r \| \| s_{m_k} - u_{m_k + 1} \| + 2 \langle r, r - u_{m_k + 1} \rangle \right] \leq (1 - \gamma_{m_k}) \| u_{m_k + 1} - r \|^2 + \gamma_{m_k} \left[ \chi_{m_k} \| u_{m_k} - u_{m_k - 1} \| \dfrac{\chi_{m_k}}{\gamma_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + 2 (1 - \gamma_{m_k}) \| u_{m_k} - r \| \dfrac{\chi_{m_k}}{\gamma_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + 2 \| r \| \| s_{m_k} - u_{m_k + 1} \| + 2 \langle r, r - u_{m_k + 1} \rangle \right]$.

Thus, we obtain

(94) $\| u_{m_k + 1} - r \|^2 \leq \chi_{m_k} \| u_{m_k} - u_{m_k - 1} \| \dfrac{\chi_{m_k}}{\gamma_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + 2 (1 - \gamma_{m_k}) \| u_{m_k} - r \| \dfrac{\chi_{m_k}}{\gamma_{m_k}} \| u_{m_k} - u_{m_k - 1} \| + 2 \| r \| \| s_{m_k} - u_{m_k + 1} \| + 2 \langle r, r - u_{m_k + 1} \rangle$.

Since $\gamma_{m_k} \to 0$ and $\{\| u_{m_k} - r \|\}$ is bounded, expressions (92) and (94) imply that

(95) $\| u_{m_k + 1} - r \|^2 \to 0$, as $k \to \infty$.

It implies that

(96) $\lim_{k \to \infty} \| u_k - r \|^2 \leq \lim_{k \to \infty} \| u_{m_k + 1} - r \|^2 \leq 0$.

As a consequence, $u_n \to r$. This completes the proof of the theorem.□

Theorem 3.7

Let the mapping $K : \mathcal{H} \to \mathcal{H}$ satisfy conditions ($K$1)–($K$4). Then, the sequence $\{u_n\}$ generated by Algorithm 3 converges strongly to a solution of problem (VIP).

Proof

The proof is the same as the proof of Theorem 3.5.□

Theorem 3.8

Let the mapping $K : \mathcal{H} \to \mathcal{H}$ satisfy conditions ($K$1)–($K$4). Then, the sequence $\{u_n\}$ generated by Algorithm 4 converges strongly to a solution of problem (VIP).

Proof

The proof is the same as the proof of Theorem 3.6.□

4 Numerical illustrations

The numerical results of the proposed iterative schemes are presented in this section, in comparison with some related work from the literature, together with an analysis of how variations in the control parameters affect the numerical effectiveness of the proposed algorithms. All computations were done in MATLAB R2018b and run on an HP laptop with an Intel Core i5-6200 processor and 8.00 GB (7.78 GB usable) RAM.

Example 4.1

Let $\mathcal{H} = l_2$ be the real Hilbert space of sequences of real numbers satisfying the following condition:

(97) $u_1^2 + u_2^2 + \cdots + u_n^2 + \cdots < +\infty$.

Assume that $K : \mathcal{C} \to \mathcal{H}$ is defined by

$K(u) = (5 - \| u \|) u, \quad \forall u \in \mathcal{H}$,

where $\mathcal{C} = \{ u \in \mathcal{H} : \| u \| \leq 3 \}$. Note that $K$ is sequentially weakly continuous on $\mathcal{H}$ and $\mathrm{VI}(\mathcal{C}, K) = \{0\}$. For any $u, y \in \mathcal{C}$, we have

(98) $\| K(u) - K(y) \| = \| (5 - \|u\|) u - (5 - \|y\|) y \| = \| 5 (u - y) - \|u\| (u - y) - (\|u\| - \|y\|) y \| \leq 5 \| u - y \| + \|u\| \| u - y \| + \big| \|u\| - \|y\| \big| \|y\| \leq 5 \| u - y \| + 3 \| u - y \| + 3 \| u - y \| = 11 \| u - y \|$.

Hence, $K$ is $L$-Lipschitz continuous with $L = 11$. For any $u, y \in \mathcal{C}$, let $\langle K(u), y - u \rangle > 0$, that is,

$\langle (5 - \|u\|) u, y - u \rangle > 0$.

Since $\|u\| \leq 3$, it implies that

$\langle u, y - u \rangle > 0$.

Thus, it implies that

(99) $\langle K(y), y - u \rangle = (5 - \|y\|) \langle y, y - u \rangle \geq (5 - \|y\|) [ \langle y, y - u \rangle - \langle u, y - u \rangle ] = (5 - \|y\|) \| y - u \|^2 \geq 2 \| u - y \|^2 \geq 0$.

Thus, we show that $K$ is quasimonotone on $\mathcal{C}$. However, $K$ is not monotone on $\mathcal{C}$: take $u = (\frac{5}{2}, 0, 0, \ldots)$ and $y = (3, 0, 0, \ldots)$; then

$\langle K(u) - K(y), u - y \rangle = (6.25 - 6)(2.5 - 3) = -0.125 < 0$.

The projection onto $\mathcal{C}$ is given by the following explicit formula:

$P_{\mathcal{C}}(u) = \begin{cases} u & \text{if } \|u\| \leq 3, \\ \dfrac{3 u}{\|u\|} & \text{otherwise.} \end{cases}$

The following settings have been taken for the numerical study: (i) Algorithm 1 (Alg1): $\rho_1 = 0.22$, $\mu = 0.44$, $\gamma_n = \frac{1}{n+2}$; (ii) Algorithm 2 (Alg2): $\rho_1 = 0.22$, $\mu = 0.44$, $\chi = 0.50$, $\gamma_n = \frac{1}{n+2}$, $\varepsilon_n = \frac{1}{(n+1)^2}$; (iii) Algorithm 3 (Alg3): $\rho_1 = 0.22$, $\mu = 0.44$, $\gamma_n = \frac{1}{n+2}$, $\varphi_n = \frac{100}{(n+1)^2}$; (iv) Algorithm 4 (Alg4): $\rho_1 = 0.22$, $\mu = 0.44$, $\chi = 0.50$, $\gamma_n = \frac{1}{n+2}$, $\varepsilon_n = \frac{1}{(n+1)^2}$, $\varphi_n = \frac{100}{(n+1)^2}$ (Tables 1 and 2).
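For reproducibility, Example 4.1 can be sketched in a finite-dimensional truncation. The paper's experiments use MATLAB; in the Python port below, the 50-dimensional truncation of $l_2$, the contraction $g(u) = u/2$, and the stopping tolerance are illustrative assumptions.

```python
import numpy as np

def K(u):
    # the operator of Example 4.1: quasimonotone, Lipschitz with L = 11
    return (5.0 - np.linalg.norm(u)) * u

def proj_C(u):
    # explicit projection onto C = { u : ||u|| <= 3 }
    n = np.linalg.norm(u)
    return u if n <= 3.0 else 3.0 * u / n

def g(u):
    # assumed strict contraction for the viscosity step
    return 0.5 * u

def alg1(u, rho=0.22, mu=0.44, tol=1e-6, max_iter=2000):
    # Algorithm 1 with parameter choices from Section 4
    for n in range(1, max_iter + 1):
        gamma = 1.0 / (n + 2)
        y = proj_C(u - rho * K(u))
        if np.linalg.norm(u - y) < tol:
            break
        v = u - rho * K(y)
        w = u - rho * K(u) - y
        t = np.dot(w, w)
        if t > 0 and np.dot(w, v - y) > 0:
            v -= (np.dot(w, v - y) / t) * w   # projection onto the half-space H_n
        u_next = gamma * g(u) + (1 - gamma) * v
        d = np.linalg.norm(K(u) - K(y))
        if d > 0:
            rho = min(rho, mu * np.linalg.norm(u - y) / d)  # rule (4)
        u = u_next
    return u

u0 = np.full(50, 2.0)   # truncation of the starting point (2, 2, ..., 2, 0, 0, ...)
r = alg1(u0)
print(np.linalg.norm(r))  # approaches 0, the unique solution
```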

Table 1

Numerical values for Example 4.1

$u_1$ | Iterations (Alg1) | Iterations (Alg3) | Time in s (Alg1) | Time in s (Alg3)
$(2, 2, \ldots, 2, 0, 0, \ldots)$, 5,000 nonzero entries | 41 | 36 | 3.9385 | 3.3376
$(1, 2, \ldots, 5{,}000, 0, 0, \ldots)$ | 57 | 48 | 4.8758 | 4.2473
$(5, 5, \ldots, 5, 0, 0, \ldots)$, 10,000 nonzero entries | 48 | 39 | 4.3734 | 3.8242
$(50, 50, \ldots, 50, 0, 0, \ldots)$, 10,000 nonzero entries | 61 | 49 | 5.8457 | 4.5748
$(500, 500, \ldots, 500, 0, 0, \ldots)$, 10,000 nonzero entries | 89 | 67 | 8.3475 | 6.9253
Table 2: Numerical values for Example 4.1

| $u_1$ | Iterations (Alg2) | Iterations (Alg4) | Time in s (Alg2) | Time in s (Alg4) |
|---|---|---|---|---|
| $(\underbrace{2, 2, \ldots, 2}_{5{,}000}, 0, 0, \ldots)$ | 29 | 21 | 2.1463833 | 2.0001833 |
| $(1, 2, \ldots, 5{,}000, 0, 0, \ldots)$ | 33 | 27 | 2.8746385 | 2.1463921 |
| $(\underbrace{5, 5, \ldots, 5}_{10{,}000}, 0, 0, \ldots)$ | 30 | 21 | 2.7244434 | 1.9273885 |
| $(\underbrace{50, 50, \ldots, 50}_{10{,}000}, 0, 0, \ldots)$ | 41 | 34 | 4.5604494 | 3.9384735 |
| $(\underbrace{500, 500, \ldots, 500}_{10{,}000}, 0, 0, \ldots)$ | 47 | 38 | 7.4629284 | 5.4629335 |

5 Conclusion

We developed several explicit extragradient-type methods for computing numerical solutions of quasimonotone variational inequalities in a real Hilbert space. The approach can be viewed as a variant of the two-step extragradient method. Two strong convergence results are established for the proposed methods. Numerical experiments were carried out to demonstrate their performance; the computational results show that the non-monotone variable step size rule further improves the efficiency of the iterative sequence.

  1. Funding information: This research was supported by the Science, Research and Innovation Promotion Funding (TSRI) (Grant No. FRB660012/0168). The research block grants were managed under Rajamangala University of Technology Thanyaburi (FRB66E0653S.2).

  2. Author contributions: All the authors contributed equally to this manuscript.

  3. Conflict of interest: No potential conflict of interest was reported by the authors.

References

[1] A. S. Antipin, On a method for convex programs using a symmetrical modification of the Lagrange function, Ekonomika i Matematicheskie Metody 12 (1976), no. 6, 1164–1173.

[2] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, vol. 408, Springer, New York, 2011. doi:10.1007/978-1-4419-9467-7.

[3] L. C. Ceng, A. Petruşel, X. Qin, and J. C. Yao, Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints, Optimization 70 (2020), no. 5–6, 1337–1358. doi:10.1080/02331934.2020.1858832.

[4] L. C. Ceng, A. Petruşel, X. Qin, and J. C. Yao, Pseudomonotone variational inequalities and fixed points, Fixed Point Theory 22 (2021), no. 2, 543–558. doi:10.24193/fpt-ro.2021.2.36.

[5] L. C. Ceng, Two inertial linesearch extragradient algorithms for the bilevel split pseudomonotone variational inequality with constraints, J. Appl. Numer. Optim. 2 (2020), no. 2, 213–233. doi:10.23952/jano.2.2020.2.07.

[6] L. C. Ceng, A. Petruşel, X. Qin, and J. C. Yao, A modified inertial subgradient extragradient method for solving pseudomonotone variational inequalities and common fixed point problems, Fixed Point Theory 21 (2020), no. 1, 93–108. doi:10.24193/fpt-ro.2020.1.07.

[7] L.-C. Ceng, A. Petruşel, J.-C. Yao, and Y. Yao, Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces, Fixed Point Theory 19 (2018), no. 2, 487–502. doi:10.24193/fpt-ro.2018.2.39.

[8] L.-C. Ceng, A. Petruşel, J.-C. Yao, and Y. Yao, Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions, Fixed Point Theory 20 (2019), no. 1, 113–134.

[9] L.-C. Ceng and M. Shang, Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings, Optimization 70 (2019), no. 4, 715–740. doi:10.1080/02331934.2019.1647203.

[10] L.-C. Ceng and M. Shang, Composite extragradient implicit rule for solving a hierarchical variational inequality with constraints of variational inclusion and fixed point problems, J. Inequal. Appl. 2020 (2020), no. 1, 19. doi:10.1186/s13660-020-2306-1.

[11] L.-C. Ceng and Q. Yuan, Composite inertial subgradient extragradient methods for variational inequalities and fixed point problems, J. Inequal. Appl. 2019 (2019), no. 1, 1–20. doi:10.1186/s13660-019-2229-x.

[12] Y. Censor, A. Gibali, and S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, J. Optim. Theory Appl. 148 (2011), no. 2, 318–335. doi:10.1007/s10957-010-9757-3.

[13] Y. Censor, A. Gibali, and S. Reich, Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space, Optim. Methods Softw. 26 (2011), no. 4–5, 827–845. doi:10.1080/10556788.2010.551536.

[14] Y. Censor, A. Gibali, and S. Reich, Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space, Optimization 61 (2012), no. 9, 1119–1132. doi:10.1080/02331934.2010.539689.

[15] C. M. Elliott, Variational and Quasivariational Inequalities: Applications to Free-Boundary Problems (Claudio Baiocchi and António Capelo), SIAM Rev. 29 (1987), no. 2, 314–315. doi:10.1137/1029059.

[16] L. He, Y.-L. Cui, L.-C. Ceng, T.-Y. Zhao, D.-Q. Wang, and H.-Y. Hu, Strong convergence for monotone bilevel equilibria with constraints of variational inequalities and fixed points using subgradient extragradient implicit rule, J. Inequal. Appl. 2021 (2021), no. 1, 1–37. doi:10.1186/s13660-021-02683-y.

[17] A. N. Iusem and B. F. Svaiter, A variant of Korpelevich's method for variational inequalities with a new search strategy, Optimization 42 (1997), no. 4, 309–321. doi:10.1080/02331939708844365.

[18] G. Kassay, J. Kolumbán, and Z. Páles, On Nash stationary points, Publ. Math. Debrecen 54 (1999), no. 3–4, 267–279. doi:10.5486/PMD.1999.1902.

[19] G. Kassay, J. Kolumbán, and Z. Páles, Factorization of Minty and Stampacchia variational inequality systems, Eur. J. Oper. Res. 143 (2002), no. 2, 377–389. doi:10.1016/S0377-2217(02)00290-4.

[20] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Society for Industrial and Applied Mathematics, Philadelphia, 2000. doi:10.1137/1.9780898719451.

[21] I. Konnov, Equilibrium Models and Variational Inequalities, vol. 210, Elsevier, Amsterdam, 2007.

[22] G. M. Korpelevich, The extragradient method for finding saddle points and other problems, Matecon 12 (1976), 747–756.

[23] L. Liu, S. Y. Cho, and J.-C. Yao, Convergence analysis of an inertial Tseng's extragradient algorithm for solving pseudomonotone variational inequalities and applications, J. Nonlinear Var. Anal. 5 (2021), no. 4, 627–644.

[24] P.-E. Maingé, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization, Set-Valued Anal. 16 (2008), no. 7–8, 899–912. doi:10.1007/s11228-008-0102-z.

[25] A. Moudafi, Viscosity approximation methods for fixed-points problems, J. Math. Anal. Appl. 241 (2000), no. 1, 46–55. doi:10.1006/jmaa.1999.6615.

[26] A. Nagurney, Network Economics: A Variational Inequality Approach, Springer, Dordrecht, 1999. doi:10.1007/978-1-4757-3005-0_1.

[27] M. Aslam Noor, Some iterative methods for nonconvex variational inequalities, Comput. Math. Model. 21 (2010), no. 1, 97–108. doi:10.1007/s10598-010-9057-7.

[28] G. Stampacchia, Formes bilinéaires coercitives sur les ensembles convexes, Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences 258 (1964), no. 18, 4413.

[29] P. Sunthrayuth, H. ur Rehman, and P. Kumam, A modified Popov's subgradient extragradient method for variational inequalities in Banach spaces, J. Nonlinear Funct. Anal. 2021 (2021), no. 1, Article ID 7. doi:10.23952/jnfa.2021.7.

[30] W. Takahashi, Introduction to Nonlinear and Convex Analysis, Yokohama Publishers, Yokohama, 2009.

[31] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim. 38 (2000), no. 2, 431–446. doi:10.1137/S0363012998338806.

[32] H. ur Rehman, A. Gibali, P. Kumam, and K. Sitthithakerngkiet, Two new extragradient methods for solving equilibrium problems, Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 115 (2021), no. 2, 75. doi:10.1007/s13398-021-01017-3.

[33] H. ur Rehman, P. Kumam, Y. Je Cho, and P. Yordsorn, Weak convergence of explicit extragradient algorithms for solving equilibrium problems, J. Inequal. Appl. 2019 (2019), no. 1, 1–25. doi:10.1186/s13660-019-2233-1.

[34] H. ur Rehman, P. Kumam, A. Gibali, and W. Kumam, Convergence analysis of a general inertial projection-type method for solving pseudomonotone equilibrium problems with applications, J. Inequal. Appl. 2021 (2021), no. 1, 1–27. doi:10.1186/s13660-021-02591-1.

[35] H. ur Rehman, P. Kumam, Y. Je Cho, Y. I. Suleiman, and W. Kumam, Modified Popov's explicit iterative algorithms for solving pseudomonotone equilibrium problems, Optim. Methods Softw. 36 (2020), 1–32. doi:10.1080/10556788.2020.1734805.

[36] H. ur Rehman, W. Kumam, P. Kumam, and M. Shutaywi, A new weak convergence non-monotonic self-adaptive iterative scheme for solving equilibrium problems, AIMS Math. 6 (2021), no. 6, 5612–5638.

[37] H.-K. Xu, Another control condition in an iterative method for nonexpansive mappings, Bull. Aust. Math. Soc. 65 (2002), no. 1, 109–113. doi:10.1017/S0004972700020116.

[38] J. Yang, H. Liu, and Z. Liu, Modified subgradient extragradient algorithms for solving monotone variational inequalities, Optimization 67 (2018), no. 12, 2247–2258. doi:10.1080/02331934.2018.1523404.

[39] L. Zhang, C. Fang, and S. Chen, An inertial subgradient-type method for solving single-valued variational inequalities and fixed point problems, Numer. Algorithms 79 (2018), no. 3, 941–956. doi:10.1007/s11075-017-0468-9.

Received: 2021-06-29
Revised: 2022-11-16
Accepted: 2023-01-20
Published Online: 2023-07-29

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
