
A Dai-Liao-type projection method for monotone nonlinear equations and signal processing

  • Abdulkarim Hassan Ibrahim, Poom Kumam, Auwal Bala Abubakar, Muhammad Sirajo Abdullahi and Hassan Mohammad
Published/Copyright: December 31, 2022

Abstract

In this article, inspired by the projection technique of Solodov and Svaiter, we exploit the simple structure, low memory requirement, and good convergence properties of the mixed conjugate gradient method of Stanimirović et al. [New hybrid conjugate gradient and Broyden-Fletcher-Goldfarb-Shanno conjugate gradient methods, J. Optim. Theory Appl. 178 (2018), no. 3, 860–884] for unconstrained optimization problems to solve convex constrained monotone nonlinear equations. The proposed method does not require Jacobian information. Under monotonicity and Lipschitz continuity assumptions, the global convergence properties of the proposed method are established. Computational experiments indicate that the proposed method is computationally efficient. Furthermore, the proposed method is applied to solve the $\ell_1$-norm regularized problems to decode sparse signals and images in compressive sensing.

MSC 2010: 90C30; 65K05; 90C53; 49M37; 15A18

1 Introduction

In this article, we are concerned with the problem of finding $x \in C$ such that

(1) $G(x) = 0,$

where $G$ is a mapping from the $n$-dimensional Euclidean space $\mathbb{R}^n$ into itself and $C$ is a nonempty closed convex subset of $\mathbb{R}^n$.

The convex constrained nonlinear equation (1) arises in various applications such as the first-order necessary condition of unconstrained convex optimization problems [1], the $\ell_1$-norm problem arising from compressive sensing [2], chemical equilibrium systems, optimal power flow equations [3], and financial forecasting problems [4,5]. These motivate researchers to develop efficient methods for finding the solutions to (1). Among the several developed methods, Newton's methods, quasi-Newton methods, and inexact-Newton methods are classical and popular iterative methods for solving the unconstrained nonlinear equation (1) [6,7]. However, because these methods require solving systems of linear equations using the Jacobian matrix or its approximation at each iteration, they are not suited to large-scale problems. As a result, a number of matrix-free approaches for addressing such problems have been developed (see [8,9,10] and references therein).

The conjugate gradient method is a matrix-free iterative method for solving large-scale unconstrained optimization problems [11,12,13,14,15,16,17,18,19]. The method has gained popularity due to its simplicity of implementation, low storage requirements, and good convergence properties. Based on these, researchers have made efforts to extend its applicability to finding the solutions to the nonlinear equation (1). For instance, Wang et al. [20] extended the work by Solodov and Svaiter [21] and proposed a projection-type method to solve (1). Furthermore, Ma and Wang [22] presented a modification of the extragradient algorithm with a projection for solving constrained nonlinear equations. By popularizing the idea of Zhang and Zhou [23], Yu et al. [24] proposed a constrained version of the spectral gradient projection algorithm for solving nonlinear equations, in which computing the sequence of steps needs neither matrix storage nor the solution of linear systems of equations. Interested readers may refer to [25,26,27,28,29,30,31,32,33], among others, for alternative proposals.

Zheng and Zheng [34] introduced two new Dai-Liao-type conjugate gradient methods for unconstrained optimization problems. Under the strong Wolfe line search condition, the convergence of the proposed methods is analyzed for uniformly convex objective functions and general objective functions, respectively. Inspired by the Dai-Liao-type conjugate gradient methods by Zheng and Zheng [34], Stanimirović et al. [35] proposed an efficient modified mixed hybrid Dai-Liao method (MMDL) that combines the two conjugate gradient parameters proposed in [34].

Based on these two conjugate gradient parameters introduced by Stanimirović et al. [35], the main contribution of this article is to propose a Dai-Liao-type projection method for finding the solutions of the nonlinear equation (1). The proposed method can be viewed as an extension of the MMDL conjugate gradient method proposed in [35]. The extension uses the popular projection technique of Solodov and Svaiter [21] to generate a sequence that converges globally under the monotonicity and Lipschitz continuity assumptions on the underlying mapping $G$. Numerical results given in Section 4 illustrate the efficiency, stability, and competitiveness of the proposed method for solving (1). Furthermore, the method is effectively employed to handle sparse signal and image restoration problems originating in compressive sensing after reformulating the $\ell_1$-norm problem as a nonsmooth monotone equation [36].

The remainder of this article is organized as follows. Section 2 presents the motivation and the algorithm. Section 3 addresses the global convergence results of the proposed method. Section 4 presents the numerical experiments for solving nonlinear equations, sparse signal reconstruction, and image restoration. Finally, conclusions are given in Section 5. Unless otherwise stated, throughout this article $\|\cdot\|$ stands for the Euclidean norm of vectors in $\mathbb{R}^n$.

2 Algorithm

We begin this section by recalling the unconstrained optimization problem

$\min\{f(x) : x \in \mathbb{R}^n\},$

where $f : \mathbb{R}^n \to \mathbb{R}$ is a nonlinear function whose gradient at a point $x_k$ is $\nabla f(x_k)$. Given an initial point $x_0$, the MMDL conjugate gradient method by Stanimirović et al. [35] generates a sequence of iterates $\{x_k\}_{k \ge 0}$ by the following recursive formula:

(2) $x_{k+1} = x_k + \alpha_k d_k, \quad k \ge 0,$

where α k > 0 is the step length obtained by a line search procedure and d k is the search direction generated by

(3) $d_0 = -\nabla f(x_0), \qquad d_k = -\nabla f(x_k) + \delta_k^M \left( I - \dfrac{\nabla f(x_k) \nabla f(x_k)^T}{\|\nabla f(x_k)\|^2} \right) d_{k-1}, \quad k > 0,$

where

(4) $s_{k-1} = x_k - x_{k-1}, \quad y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1}),$ and $I$ is the identity matrix,

$\delta_k^\pi = \dfrac{\|\nabla f(x_k)\|^2 - \frac{\|\nabla f(x_k)\|}{\|\nabla f(x_{k-1})\|} |\nabla f(x_k)^T \nabla f(x_{k-1})|}{\mu |\nabla f(x_k)^T d_{k-1}| + d_{k-1}^T y_{k-1}} - \alpha_k \dfrac{\nabla f(x_k)^T s_{k-1}}{d_{k-1}^T y_{k-1}}, \quad \mu > 0,$

$\delta_k^\beta = \dfrac{\|\nabla f(x_k)\|^2 - \frac{\|\nabla f(x_k)\|}{\|\nabla f(x_{k-1})\|} |\nabla f(x_k)^T \nabla f(x_{k-1})|}{\mu |\nabla f(x_k)^T d_{k-1}| - d_{k-1}^T \nabla f(x_{k-1})} - \alpha_k \dfrac{\nabla f(x_k)^T s_{k-1}}{d_{k-1}^T y_{k-1}},$

$\delta_k^M = \max\{0, \min\{\delta_k^\pi, \delta_k^\beta\}\}.$

As shown by Stanimirović et al. [35], $\nabla f(x_k)^T d_k = -\|\nabla f(x_k)\|^2$ holds for all $k$. This equality shows that each $d_k$ is a descent direction of $f$ at $x_k$, which is a crucial property for the iterative method to converge globally. Based on the MMDL method (2)–(4), in what follows, we introduce our method for solving the nonlinear equation (1). The method is a projection-based approach that generates the trial point $w_k$ using the following formula:

(5) w k = x k + α k d k .

Inspired by (3), d k in (5) is computed as follows:

(6) $d_0 = -G(x_0), \qquad d_k = -G(x_k) + \delta_k \left( I - \dfrac{G(x_k) G(x_k)^T}{\|G(x_k)\|^2} \right) s_{k-1}, \quad k > 0,$

where $s_k = w_k - x_k = \alpha_k d_k$ and the other parameters are defined as follows:

(7) $y_{k-1} = G(x_k) - G(x_{k-1}) + r s_{k-1}, \quad r > 0,$

(8) $t_{k-1} = 1 + \max\left\{0, -\dfrac{d_{k-1}^T y_{k-1}}{d_{k-1}^T d_{k-1}}\right\}, \qquad z_{k-1} = y_{k-1} + t_{k-1} d_{k-1},$

(9) $\delta_k^{(1)} = \dfrac{\|G(x_k)\|^2 - \frac{\|G(x_k)\|}{\|G(x_{k-1})\|} |G(x_k)^T G(x_{k-1})|}{\mu |G(x_k)^T d_{k-1}| + d_{k-1}^T z_{k-1}} - \alpha_k \dfrac{G(x_k)^T s_{k-1}}{d_{k-1}^T z_{k-1}},$

$\delta_k^{(2)} = \dfrac{\|G(x_k)\|^2 - \frac{\|G(x_k)\|}{\|G(x_{k-1})\|} |G(x_k)^T G(x_{k-1})|}{\mu |G(x_k)^T d_{k-1}| - d_{k-1}^T G(x_{k-1})} - \alpha_k \dfrac{G(x_k)^T s_{k-1}}{d_{k-1}^T z_{k-1}},$

$\delta_k = \max\{0, \min\{\delta_k^{(1)}, \delta_k^{(2)}\}\}.$

Note that the definitions of $\delta_k^{(1)}$ and $\delta_k^{(2)}$ in (9) are similar to those of $\delta_k^\pi$ and $\delta_k^\beta$ in (4); there is, however, a slight modification in order to guarantee the boundedness of the proposed derivative-free search direction (6). Next, we describe the algorithm for the proposed method. But first, we recall the projection map, denoted $P_C$, which maps $\mathbb{R}^n$ onto the nonempty closed convex set $C$, that is,

$P_C(x) = \arg\min\{\|x - y\| : y \in C\},$

which has the well-known nonexpansive property

(10) $\|P_C(x) - P_C(y)\| \le \|x - y\|, \quad \forall x, y \in \mathbb{R}^n.$
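For intuition, the projection and its nonexpansiveness (10) are easy to verify numerically when $C$ is a box, a common choice in the experiments below being $C = \mathbb{R}^n_+$. The following is an illustrative sketch only; the function name `project_box` is ours, not from the article.

```python
import numpy as np

def project_box(x, lo=0.0, hi=np.inf):
    """Euclidean projection onto the box C = {y : lo <= y <= hi};
    with lo = 0 and hi = inf this is the projection onto R^n_+."""
    return np.clip(x, lo, hi)

# Numerical check of the nonexpansiveness property (10).
rng = np.random.default_rng(0)
x, y = rng.normal(size=6), rng.normal(size=6)
assert (np.linalg.norm(project_box(x) - project_box(y))
        <= np.linalg.norm(x - y) + 1e-12)
```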

Algorithm 2.1

Initialization. Choose any initial point $x_0 \in C$ and the constants $\rho \in (0,1)$, $\varepsilon \in (0,1)$, $\kappa \in (0,1]$, $r > 0$, $\gamma \in (0,1)$, $\varrho \in (0,2)$. Set $k = 0$.

Step 1. Compute $G(x_k)$. If $\|G(x_k)\| \le \varepsilon$, stop. Otherwise, compute the search direction $d_k$ by (6).

Step 2. Determine the step-size $\alpha_k = \kappa \rho^i$, where $i$ is the least nonnegative integer such that

(11) $-G(x_k + \alpha_k d_k)^T d_k \ge \gamma \alpha_k \|d_k\|^2,$

and compute the trial point $w_k = x_k + \alpha_k d_k$.

Step 3. If $w_k \in C$ and $\|G(w_k)\| \le \varepsilon$, stop and set $x_{k+1} = w_k$. Otherwise, compute the next iterate by

(12) $x_{k+1} = P_C[x_k - \varrho \varphi_k G(w_k)],$

where

$\varphi_k = \dfrac{G(w_k)^T (x_k - w_k)}{\|G(w_k)\|^2}.$

Step 4. Set $k = k + 1$ and go to Step 1.
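The steps above can be sketched compactly as follows. This is our own minimal Python rendering of Algorithm 2.1 under simplifying assumptions, not the authors' MATLAB implementation: the membership test $w_k \in C$ in Step 3 is omitted, and no safeguards against tiny denominators are included.

```python
import numpy as np

def dlpa(G, project, x0, kappa=1.0, rho=0.6, gamma=1e-4, mu=1.2,
         r=1e-3, varrho=1.8, eps=1e-6, max_iter=1000):
    """Sketch of Algorithm 2.1 (DLPA). G: the monotone mapping,
    project: Euclidean projection onto C. Names follow the text."""
    x = np.asarray(x0, float)
    Gx = G(x)
    d = -Gx
    Gx_prev = d_prev = None
    for k in range(max_iter):
        if np.linalg.norm(Gx) <= eps:
            return x
        if k > 0:
            s = alpha * d_prev            # s_{k-1} = w_{k-1} - x_{k-1}
            y = Gx - Gx_prev + r * s      # (7)
            t = 1.0 + max(0.0, -(d_prev @ y) / (d_prev @ d_prev))
            z = y + t * d_prev            # (8)
            num = (np.linalg.norm(Gx)**2
                   - np.linalg.norm(Gx) / np.linalg.norm(Gx_prev)
                   * abs(Gx @ Gx_prev))
            dz = d_prev @ z
            d1 = num / (mu * abs(Gx @ d_prev) + dz) - alpha * (Gx @ s) / dz
            d2 = (num / (mu * abs(Gx @ d_prev) - d_prev @ Gx_prev)
                  - alpha * (Gx @ s) / dz)
            delta = max(0.0, min(d1, d2))                       # (9)
            d = -Gx + delta * (s - (Gx @ s) / (Gx @ Gx) * Gx)   # (6)
        # Backtracking line search (11): alpha_k = kappa * rho^i.
        alpha = kappa
        while -(G(x + alpha * d) @ d) < gamma * alpha * (d @ d):
            alpha *= rho
        w = x + alpha * d
        Gw = G(w)
        if np.linalg.norm(Gw) <= eps:      # Step 3 (C-membership omitted)
            return w
        phi = (Gw @ (x - w)) / (Gw @ Gw)
        Gx_prev, d_prev = Gx, d
        x = project(x - varrho * phi * Gw)  # projection step (12)
        Gx = G(x)
    return x
```

For example, with the toy monotone mapping $G(x) = x$ and $C = \mathbb{R}^n_+$ (projection `np.maximum(v, 0)`), the iterates reach the solution $x = 0$ in a couple of steps.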

3 Convergence analysis

To establish the global convergence of Algorithm 2.1, we need the following assumptions and lemmas.

  (A1) The solution set of problem (1) is nonempty.

  (A2) The mapping $G$ is monotone on $C$. That is, for all $x, y \in C$,

    $(G(x) - G(y))^T (x - y) \ge 0.$

  (A3) The mapping $G$ is Lipschitz continuous on $C$. That is, there exists a constant $L > 0$ such that for all $x, y \in C$,

    $\|G(x) - G(y)\| \le L \|x - y\|.$
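As a concrete illustration of (A2) and (A3) — using our own toy mapping, not one of the paper's test problems — the componentwise map $G(x) = 2x + \sin(x)$ is monotone with Lipschitz constant $L = 3$, since its Jacobian $\mathrm{diag}(2 + \cos x_i)$ has eigenvalues in $[1, 3]$:

```python
import numpy as np

# Toy mapping: G(x) = 2x + sin(x), applied componentwise.
G = lambda x: 2.0 * x + np.sin(x)

# Empirically verify monotonicity (A2) and Lipschitz continuity (A3)
# with L = 3 on random pairs of points.
rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.normal(size=4), rng.normal(size=4)
    assert (G(x) - G(y)) @ (x - y) >= 0.0                              # (A2)
    assert np.linalg.norm(G(x) - G(y)) <= 3.0 * np.linalg.norm(x - y)  # (A3)
```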

In what follows, G ( x k ) is denoted as G k for the sake of simplicity.

Lemma 3.1

The search direction d k generated by (6) satisfies the sufficient descent condition. That is,

(13) $G_k^T d_k = -\|G_k\|^2, \quad \forall k \ge 0.$

Proof

For $k = 0$, multiplying both sides of (6) by $G_0^T$, we have

$G_0^T d_0 = -\|G_0\|^2.$

Also, for $k > 0$, multiplying both sides of (6) by $G_k^T$, we obtain

$G_k^T d_k = -\|G_k\|^2 + \delta_k G_k^T s_{k-1} - \delta_k \dfrac{\|G_k\|^2 G_k^T s_{k-1}}{\|G_k\|^2} = -\|G_k\|^2 + \delta_k G_k^T s_{k-1} - \delta_k G_k^T s_{k-1} = -\|G_k\|^2.$□

Lemma 3.2

The line search condition (11) is well defined. That is, for each $k \ge 0$, there exists a nonnegative integer $i$ satisfying (11).

Proof

Assume that there exists $k_0 \ge 0$ such that (11) is not satisfied for any nonnegative integer $i$, that is,

$-G(x_{k_0} + \kappa \rho^i d_{k_0})^T d_{k_0} < \gamma \kappa \rho^i \|d_{k_0}\|^2, \quad \forall i \ge 0.$

By the continuity of the mapping $G$, letting $i \to \infty$ results in

$-G_{k_0}^T d_{k_0} \le 0,$

which contradicts (13). This completes the proof.□

Lemma 3.3

Let $\{w_k\}_{k \ge 0}$ and $\{x_k\}_{k \ge 0}$ be generated by Algorithm 2.1, and assume that (A1)–(A3) hold. Then

$\alpha_k > \min\left\{\kappa, \dfrac{\rho}{L + \gamma} \dfrac{\|G_k\|^2}{\|d_k\|^2}\right\}.$

Proof

Observe from (11) that if $\alpha_k \ne \kappa$, then $\bar{\alpha}_k = \rho^{-1} \alpha_k$ does not satisfy (11), that is,

(14) $-G(x_k + \rho^{-1} \alpha_k d_k)^T d_k < \gamma \rho^{-1} \alpha_k \|d_k\|^2.$

Inequality (14) combined with (13) and Lipschitz continuity of G implies

(15) $\|G_k\|^2 = -G_k^T d_k = (G(x_k + \rho^{-1} \alpha_k d_k) - G_k)^T d_k - G(x_k + \rho^{-1} \alpha_k d_k)^T d_k < \rho^{-1} \alpha_k L \|d_k\|^2 + \rho^{-1} \alpha_k \gamma \|d_k\|^2 = \rho^{-1} \alpha_k (L + \gamma) \|d_k\|^2.$

Thus, from (15) we have,

(16) $\alpha_k > \min\left\{\kappa, \dfrac{\rho}{L + \gamma} \dfrac{\|G_k\|^2}{\|d_k\|^2}\right\}.$

This proves Lemma 3.3.□

Lemma 3.4

Let the sequence $\{x_k\}_{k \ge 0}$ be generated by Algorithm 2.1. Then there exists $\varpi > 0$ such that

(17) $\|G_k\| \le \varpi, \quad \forall k \ge 0.$

Proof

Let $x^*$ be a solution of (1), which exists by (A1), so that $G(x^*) = 0$. By (A2) and (11), we have

(18) $G(w_k)^T (x_k - x^*) = G(w_k)^T (w_k - x^*) + G(w_k)^T (x_k - w_k) \ge G(x^*)^T (w_k - x^*) + G(w_k)^T (x_k - w_k) = G(w_k)^T (x_k - w_k) \ge \gamma \alpha_k^2 \|d_k\|^2 = \gamma \|x_k - w_k\|^2.$

By the nonexpansive property of the projection operator, it holds that

(19) $\|x_{k+1} - x^*\|^2 = \|P_C[x_k - \varrho \varphi_k G(w_k)] - x^*\|^2 \le \|x_k - \varrho \varphi_k G(w_k) - x^*\|^2$
$= \|x_k - x^*\|^2 - 2 \varrho \varphi_k G(w_k)^T (x_k - x^*) + \varrho^2 \varphi_k^2 \|G(w_k)\|^2$
$\le \|x_k - x^*\|^2 - 2 \varrho \dfrac{G(w_k)^T (x_k - w_k)}{\|G(w_k)\|^2} G(w_k)^T (x_k - w_k) + \varrho^2 \dfrac{[G(w_k)^T (x_k - w_k)]^2}{\|G(w_k)\|^2}$
$= \|x_k - x^*\|^2 - \varrho (2 - \varrho) \dfrac{[G(w_k)^T (x_k - w_k)]^2}{\|G(w_k)\|^2}$

(20) $\le \|x_k - x^*\|^2.$

By inequality (20), we know that $\{\|x_k - x^*\|\}_{k \ge 0}$ is a nonincreasing sequence. Therefore, $\{x_k\}_{k \ge 0}$ is bounded. Moreover,

$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 \le \|x_{k-1} - x^*\|^2 \le \cdots \le \|x_0 - x^*\|^2.$

By (A3), we have

$\|G_k\| = \|G_k - G(x^*)\| \le L \|x_k - x^*\| \le L \|x_0 - x^*\| =: \varpi.$

Hence, (17) holds.□

Lemma 3.5

Let $\{w_k\}_{k \ge 0}$ and $\{x_k\}_{k \ge 0}$ be generated by Algorithm 2.1. Then,

  1. $\{w_k\}_{k \ge 0}$ is bounded,

  2. $\lim_{k \to \infty} \|x_k - w_k\| = 0$,

  3. $\lim_{k \to \infty} \|x_k - x_{k+1}\| = 0$.

Proof

  1. Since $\{x_k\}_{k \ge 0}$ is bounded, from (18) we have

    (21) $G(w_k)^T (x_k - w_k) \ge \gamma \|x_k - w_k\|^2.$

    Utilizing (17) and the monotonicity of $G$, we have

    $G(w_k)^T (x_k - w_k) = (G(w_k) - G_k)^T (x_k - w_k) + G_k^T (x_k - w_k) \le G_k^T (x_k - w_k) \le \|G_k\| \|x_k - w_k\| \le \varpi \|x_k - w_k\|.$

    Combining the above with (21), it is easy to deduce that

    $\|x_k - w_k\| \le \dfrac{\varpi}{\gamma},$

    which implies

    $\|w_k\| \le \dfrac{\varpi}{\gamma} + \|x_k\|.$

    Hence, { w k } k 0 is bounded due to the boundedness of { x k } k 0 .

  2. From (19), we have

    $\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 - \varrho (2 - \varrho) \dfrac{[G(w_k)^T (x_k - w_k)]^2}{\|G(w_k)\|^2} \le \|x_k - x^*\|^2 - \varrho (2 - \varrho) \gamma^2 \dfrac{\|x_k - w_k\|^4}{\|G(w_k)\|^2},$

    which means

    $\varrho (2 - \varrho) \dfrac{\|x_k - w_k\|^4}{\|G(w_k)\|^2} \le \gamma^{-2} (\|x_k - x^*\|^2 - \|x_{k+1} - x^*\|^2).$

    By (A3) and the boundedness of $\{w_k\}_{k \ge 0}$, the sequence $\{G(w_k)\}_{k \ge 0}$ is bounded. Thus, there exists $\varpi_1 > 0$ such that $\|G(w_k)\| \le \varpi_1$, and furthermore

    $\varrho (2 - \varrho) \sum_{k=0}^{\infty} \|x_k - w_k\|^4 \le \varpi_1^2 \gamma^{-2} \sum_{k=0}^{\infty} (\|x_k - x^*\|^2 - \|x_{k+1} - x^*\|^2) \le \varpi_1^2 \gamma^{-2} \|x_0 - x^*\|^2 < \infty.$

    Hence,

    (22) $\lim_{k \to \infty} \alpha_k \|d_k\| = \lim_{k \to \infty} \|x_k - w_k\| = 0.$

  3. From (10), we have

    $\|x_k - x_{k+1}\| = \|x_k - P_C[x_k - \varrho \varphi_k G(w_k)]\| \le \|x_k - (x_k - \varrho \varphi_k G(w_k))\| = \varrho \varphi_k \|G(w_k)\| \le \varrho \|x_k - w_k\|.$

    Therefore, we obtain $\lim_{k \to \infty} \|x_k - x_{k+1}\| = 0$.□

The following theorem establishes the global convergence of Algorithm 2.1.

Theorem 3.6

Suppose that (A1)–(A3) hold. If { x k } k 0 is the sequence generated by Algorithm 2.1, then

(23) $\liminf_{k \to \infty} \|G_k\| = 0.$

Furthermore, { x k } k 0 converges to a solution of (1).

Proof

Suppose that (23) does not hold. Then there exists a constant $\varepsilon_0 > 0$ such that

(24) $\|G_k\| \ge \varepsilon_0, \quad \forall k \ge 0.$

By (13), we know

$\|G_k\| \|d_k\| \ge -G_k^T d_k = \|G_k\|^2,$

which implies

(25) $\|d_k\| \ge \|G_k\| \ge \varepsilon_0, \quad \forall k \ge 0.$

In view of the definitions of $y_{k-1}$ and $z_{k-1}$ in (7) and (8), we have

(26) $d_{k-1}^T z_{k-1} = d_{k-1}^T y_{k-1} + t_{k-1} \|d_{k-1}\|^2 = d_{k-1}^T y_{k-1} + \|d_{k-1}\|^2 + \max\{0, -d_{k-1}^T y_{k-1}\} \ge \|d_{k-1}\|^2 > 0.$

Next, we show that d k is bounded. To show this, we have the following cases:

  1. If $\delta_k = 0$, it is easy to see that

    $\|d_k\| = \|G_k\| \le \varpi.$

  2. If $\delta_k = \min\{\delta_k^{(1)}, \delta_k^{(2)}\}$, then we have the following subcases:

    • If $\min\{\delta_k^{(1)}, \delta_k^{(2)}\} = \delta_k^{(1)}$, then

      (27) $\|d_k\| \le \|G_k\| + 2 |\delta_k^{(1)}| \|s_{k-1}\|.$

      From the definition of $\delta_k^{(1)}$, (26), and the fact that $\alpha_k \in (0, 1)$ for all $k$, we have

      (28) $|\delta_k^{(1)}| \le \dfrac{\|G_k\|^2 + \frac{\|G_k\|}{\|G_{k-1}\|} |G_k^T G_{k-1}| + \alpha_k |G_k^T s_{k-1}|}{d_{k-1}^T z_{k-1}} \le \dfrac{2 \|G_k\|^2 + \alpha_k \alpha_{k-1} \|G_k\| \|d_{k-1}\|}{\|d_{k-1}\|^2} \le \dfrac{2 \|G_k\|^2}{\|d_{k-1}\|^2} + \dfrac{\|G_k\|}{\|d_{k-1}\|}.$

      Relating (27) with (28), together with (25), we have

      $\|d_k\| \le \|G_k\| + 2 \left( \dfrac{2 \|G_k\|^2}{\|d_{k-1}\|^2} + \dfrac{\|G_k\|}{\|d_{k-1}\|} \right) \alpha_{k-1} \|d_{k-1}\| \le \|G_k\| + \dfrac{4 \|G_k\|^2}{\|d_{k-1}\|} + 2 \|G_k\| \le \varpi + \dfrac{4 \varpi^2}{\varepsilon_0} + 2 \varpi =: b_1.$

    • Next, we show that $\{d_k\}$ is bounded if $\min\{\delta_k^{(1)}, \delta_k^{(2)}\} = \delta_k^{(2)}$. Since $-d_{k-1}^T G_{k-1} = \|G_{k-1}\|^2$ by (13), we have

      $|\delta_k^{(2)}| \le \dfrac{\|G_k\|^2 + \frac{\|G_k\|}{\|G_{k-1}\|} |G_k^T G_{k-1}|}{-d_{k-1}^T G_{k-1}} + \alpha_k \dfrac{|G_k^T s_{k-1}|}{d_{k-1}^T z_{k-1}} \le \dfrac{2 \|G_k\|^2}{\|G_{k-1}\|^2} + \alpha_{k-1} \dfrac{\|G_k\|}{\|d_{k-1}\|}.$

      From (26), (27), and the fact that $\alpha_k \in (0, 1)$ for all $k$, it holds that

      $\|d_k\| \le \|G_k\| + 2 |\delta_k^{(2)}| \|s_{k-1}\| \le \|G_k\| + 2 \left( \dfrac{2 \|G_k\|^2}{\|G_{k-1}\|^2} + \alpha_{k-1} \dfrac{\|G_k\|}{\|d_{k-1}\|} \right) \alpha_{k-1} \|d_{k-1}\| \le 3 \|G_k\| + \dfrac{4 \|G_k\|^2}{\|G_{k-1}\|^2} \alpha_{k-1} \|d_{k-1}\| \le 3 \varpi + \dfrac{4 \varpi^2}{\varepsilon_0^2} \alpha_{k-1} \|d_{k-1}\|,$

      for all $k \in \mathbb{N}$. Equation (22) implies that for every $\varepsilon_1 > 0$, there exists an index $k_0 \in \mathbb{N}$ such that $\alpha_{k-1} \|d_{k-1}\| < \varepsilon_1$ for all $k > k_0$. If we choose $\varepsilon_1 = \varepsilon_0^2$ and $b_2 = \max\{\|d_0\|, \|d_1\|, \ldots, \|d_{k_0}\|, \bar{b}_2\}$, where $\bar{b}_2 = \varpi (3 + 4 \varpi)$, it holds that $\|d_k\| \le b_2$ for every $k \in \mathbb{N}$.

Let $\upsilon = \max\{\varpi, b_1, b_2\}$; then for every $k \ge 0$,

(29) $\|d_k\| \le \upsilon.$

Combining (16), (24), (25), and (29), we know that for all $k$,

$\alpha_k \|d_k\| > \min\left\{\kappa, \dfrac{\rho}{L + \gamma} \dfrac{\|G_k\|^2}{\|d_k\|^2}\right\} \|d_k\| = \min\left\{\kappa \|d_k\|, \dfrac{\rho}{L + \gamma} \dfrac{\|G_k\|^2}{\|d_k\|}\right\} \ge \min\left\{\kappa \varepsilon_0, \dfrac{\rho \varepsilon_0^2}{(L + \gamma) \upsilon}\right\} > 0.$

The last inequality contradicts (22). Consequently, (23) holds.

Since $G$ is continuous and (23) holds, the sequence $\{x_k\}_{k \ge 0}$ has an accumulation point, say $\hat{x}$, for which $G(\hat{x}) = 0$, that is, $\hat{x}$ is a solution of (1). From (20), it holds that $\{\|x_k - \hat{x}\|\}_{k \ge 0}$ converges, and since $\hat{x}$ is an accumulation point of $\{x_k\}_{k \ge 0}$, the sequence $\{x_k\}_{k \ge 0}$ converges to $\hat{x}$.□

4 Numerical experiments

In this section, numerical experiments are carried out to show the effectiveness and robustness of the proposed method. All experiments were performed on an HP Pavilion 14 with a 1 TB hard drive and 8 GB of RAM, running the Windows 10 operating system. All algorithms were coded in MATLAB R2019b. The numerical performance of the method is tested on monotone equations and on signal and image recovery problems. Throughout this section, Algorithm 2.1 is referred to as the Dai-Liao projection algorithm (DLPA).

4.1 Test on monotone equations

We discuss numerical test results for DLPA on monotone nonlinear equations and compare it with similar algorithms designed to solve monotone nonlinear equations. The considered methods for comparison are: the conjugate gradient method for solving convex constrained monotone equations (CGD) [37], the projection method for convex constrained monotone nonlinear equations (PCG) [38], and the derivative-free iterative method for nonlinear monotone equations (PDY) [39]. We implement DLPA using the following parameters: $\kappa = 1$, $\rho = 0.6$, $\varrho = 1.8$, $\gamma = 10^{-4}$, $\mu = 1.2$, and $r = 10^{-3}$. The parameters for CGD, PCG, and PDY are set as reported in the numerical sections of their respective articles. The iterations are terminated either when the maximum number of iterations, set to 1,000, is reached or when

$\|G_k\| \le 10^{-6}.$

We list the test problems utilized for this experiment in Appendix A. Note that the mapping $G$ is taken as $G(x) = (g_1(x), \ldots, g_n(x))^T$.

The algorithms are tested using seven different initial points, one of which is randomly generated in $\mathbb{R}^n$. We ran the algorithms for several dimensions ranging from $n$ = 1,000 to 100,000. The results obtained by executing the various algorithms on the test problems are reported in Tables A1–A9.

In order to have a clear visualization of the efficiency of the algorithms, we employ the performance profile of Dolan and Moré [40]. This profile can be considered a tool for evaluating and comparing the performance of iterative methods, where the profile of each method is measured by the ratio of its computational outcome to that of the best-performing method.
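In outline, the Dolan–Moré profile for a cost table can be computed as follows. This is a generic sketch of the standard construction; the function and variable names are ours, not from the article or [40].

```python
import numpy as np

def performance_profile(T, taus):
    """T[i, s]: cost (e.g. iterations, CPU time) of solver s on problem i,
    np.inf on failure. Returns rho[s][j]: the fraction of problems that
    solver s solves within a factor taus[j] of the best solver."""
    ratios = T / T.min(axis=1, keepdims=True)   # performance ratios r_{i,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])
```

For example, with two solvers on three problems, `performance_profile(np.array([[10., 20.], [30., 30.], [np.inf, 5.]]), [1.0, 2.0])` gives the fraction of wins at $\tau = 1$ and the coverage at $\tau = 2$ for each solver.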

By using the Dolan and Moré performance profiling tool, we obtain Figures 1, 2, and 3. Figure 1 illustrates the performance profile of the DLPA, CGD, PCG, and PDY methods based on the number of iterations. It can be seen that the performance of all considered methods is competitive; however, DLPA outperforms the other three methods, solving about 60% of the test problems with the least number of iterations. Also, considering function evaluations and CPU running time, DLPA obtained better results than the other methods; their performances are illustrated in Figures 2 and 3, respectively. The good numerical performance of this method could be attributed to the choice of the search direction.

Figure 1: Performance profile of DLPA, CGD, PCG, and PDY methods in terms of number of iterations.

Figure 2: Performance profile of DLPA, CGD, PCG, and PDY methods in terms of number of function evaluations.

Figure 3: Performance profile of DLPA, CGD, PCG, and PDY methods in terms of CPU time.

4.2 Compressive sensing

In many practical problems emerging in science and technology, one encounters the task of inferring quantities of interest from measured information. For instance, in signal and image processing, one would like to reconstruct a signal from measured data. When the information acquisition process is linear, the problem reduces to solving a linear system of equations. To state the problem precisely, the observed data $b \in \mathbb{R}^k$ are connected to the original signal $x \in \mathbb{R}^n$ of interest via the equation

(30) $b = Ax,$

where $A \in \mathbb{R}^{k \times n}$ ($k < n$) is a linear map. As prior information, we assume that $x$ itself is sparse, that is, it has very few nonzero coefficients. In order to reconstruct the original signal $x$, it is necessary to solve the linear system (30). Since $k < n$, if the linear system (30) has at least one solution, then it has infinitely many solutions; that is, the system is under-determined.

Our interest here is finding the sparse solutions to an under-determined linear system of the form (30) arising from compressive sensing. To regain the sparse signal $x$ from the linear system (30), one may consider the problem of finding the sparsest signal among all solutions of the linear system (30), that is, solving the $\ell_0$-regularized problem

(31) $\min_x \{\|x\|_0 : Ax = b\},$

where $\|x\|_0$ denotes the number of nonzero components of $x$. Nonetheless, since the $\ell_0$-norm is not easy to handle computationally, some researchers have developed an alternative model by replacing the $\ell_0$-norm with the $\ell_1$-norm and solved the following problem:

(32) $\min_x \{\|x\|_1 : Ax = b\}.$

Under some mild assumptions, Donoho [41] proved that the solution(s) of problem (31) also solve (32). The observed value $b$ usually contains some noise in most applications; thus, problem (32) can be relaxed to the penalized least-squares problem

(33) $\min_x\ \tau \|x\|_1 + \dfrac{1}{2} \|Ax - b\|_2^2,$

where $\tau > 0$ is a parameter balancing the tradeoff between sparsity and residual error.

Problems of the form (33) have become familiar over the past three decades, particularly in compressive sensing context. Compressed sensing is a novel method of signal processing, which was introduced in 2006, and since then has already become a key concept in various areas of applied mathematics, computer science, and electrical engineering. Interested readers may refer to the following references ([42] and [43]) for more details.

To solve the model (33), several iterative algorithms have been developed, for instance, the fast iterative shrinkage-thresholding algorithm (FISTA) and the iterative shrinkage-thresholding (IST) algorithm. These iterative algorithms are well known for their simplicity and efficiency [44,45]. Furthermore, gradient descent methods are also effective methods developed for solving image reconstruction problems. In order to solve (33) in compressive sensing using the gradient projection method, Figueiredo et al. [36] reformulated (33) into a convex quadratic form. In what follows, we give a short overview of the reformulation of (33) into a convex quadratic program by Xiao et al. [46].
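To make the shrinkage idea concrete, one basic IST-style iteration for (33) applies the soft-thresholding operator to a gradient step on the least-squares term. The following is our own minimal sketch, not the FISTA/IST codes of [44,45]:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, tau, n_iter=500):
    """Basic iterative shrinkage-thresholding for
    min_x tau * ||x||_1 + 0.5 * ||A x - b||_2^2,
    with fixed step eta = 1 / ||A||_2^2."""
    eta = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - eta * A.T @ (A @ x - b), eta * tau)
    return x
```

With $A = I$, the method reduces to a single soft-thresholding of $b$, which makes its behavior easy to verify by hand.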

Consider any vector $x \in \mathbb{R}^n$; $x$ can be rewritten as

$x = u - v, \quad u \ge 0, \quad v \ge 0,$

where $u, v \in \mathbb{R}^n$, $u_i = (x_i)_+$, and $v_i = (-x_i)_+$ for all $i \in \{1, \ldots, n\}$, with $(\cdot)_+ = \max\{0, \cdot\}$. Therefore, the $\ell_1$-norm can be represented as $\|x\|_1 = e_n^T u + e_n^T v$, where $e_n$ is the $n$-dimensional vector with all elements equal to one. Thus, (33) can be rewritten as

(34) $\min_{u,v}\ \dfrac{1}{2} \|b - A(u - v)\|^2 + \tau e_n^T u + \tau e_n^T v, \quad u \ge 0,\ v \ge 0.$

Moreover, from [36], with no difficulty, (34) can be rewritten as the quadratic program problem with box constraints. That is,

(35) $\min_z\ \dfrac{1}{2} z^T H z + c^T z, \quad z \ge 0,$

where

$z = \begin{pmatrix} u \\ v \end{pmatrix}, \quad c = \tau e_{2n} + \begin{pmatrix} -y \\ y \end{pmatrix}, \quad y = A^T b, \quad H = \begin{pmatrix} A^T A & -A^T A \\ -A^T A & A^T A \end{pmatrix}.$

Obviously, $H$ is a positive semi-definite matrix, which indicates that problem (35) is a convex quadratic programming problem. Quite recently, (35) was translated into a linear variational inequality problem by Xiao and Zhu [37], which is equivalent to a linear complementarity problem. It was noted that $z$ is a solution of (35) if and only if it is a solution of the following nonlinear system of equations:

(36) $G(z) = \min\{z, Hz + c\} = 0,$

where the minimum is taken componentwise and $C = \mathbb{R}^{2n}_+$. Hence, solving problem (36) is equivalent to solving problem (33), and therefore we can make use of our algorithm to solve (36).

4.3 Recovery of sparse signals

In this subsection, our main focus is utilizing DLPA to recover a one-dimensional sparse signal from a limited number of measurements with additive noise. Similar to [37,47], the quality of recovery is measured by the mean squared error (MSE) defined as

(37) $\mathrm{MSE} = \dfrac{1}{n} \|x - \bar{x}\|^2,$

where $x$ is the original signal and $\bar{x}$ is the recovered signal.

Due to the capacity restrictions of the PC, we select a small signal with a length of 2,048 and 512 sampling measurements. The original signal $x$ contains 64 randomly placed nonzero elements. Furthermore, during the experiment, a random Gaussian matrix $A$ is generated using the MATLAB command randn(k,n). In the test, the measurement $b$ is computed by

$b = Ax + \delta,$

where $\delta$ is Gaussian noise distributed as $N(0, 10^{-4})$. The parameters for DLPA are as follows: $\kappa = 10$, $\rho = 0.9$, $\gamma = 10^{-4}$.

To evaluate the performance of DLPA, we test it against similar algorithms that were specially designed to solve monotone nonlinear equations with convex constraints and to reconstruct sparse signals in compressive sensing. These algorithms include SGCS [46], CGD [37], and PCG [38]. For fairness in comparing the algorithms, the iterative process of each algorithm starts at $x_0 = A^T b$ and terminates when

$\mathrm{Tol} = \dfrac{|f_k - f_{k-1}|}{|f_{k-1}|} < 10^{-4},$

where $f(x) = \dfrac{1}{2} \|Ax - b\|_2^2 + \tau \|x\|_1$ is the objective function and $f_k$ denotes the function value at $x_k$.

Comparing the four algorithms in Figure 4, it is not difficult to see that the original signal was recovered by all four algorithms. However, DLPA performed best in decoding the sparse signal, as reflected by its smaller number of iterations and shorter computing time. To further illustrate the efficiency of DLPA, we repeated the experiment on ten different noise samples. In each run, DLPA proved to be more efficient than SGCS, CGD, and PCG in terms of iteration count and CPU time. The summary is given in Table 1.

Figure 4: Reconstruction of sparse signal. From top to bottom: the original signal (first plot), the measurement (second plot), and the reconstructed signals by DLPA (third plot), SGCS (fourth plot), CGD (fifth plot), and PCG (sixth plot).

Table 1

Experimental results of sparse signal decoding in compressed sensing problem via DLPA, SGCS, CGD, and PCG algorithms

         | DLPA                  | SGCS                  | CGD                   | PCG
         | Time (s) Niter MSE    | Time (s) Niter MSE    | Time (s) Niter MSE    | Time (s) Niter MSE
         | 1.95   78  3.49×10⁻⁶  | 3.08  131  4.74×10⁻⁶  | 4.08  152  3.91×10⁻⁶  | 3.27  111  3.75×10⁻⁶
         | 2.36   86  3.29×10⁻⁶  | 3.39  125  4.05×10⁻⁶  | 3.75  129  3.48×10⁻⁶  | 2.75   98  5.76×10⁻⁶
         | 2.95   94  7.58×10⁻⁶  | 3.64  129  9.43×10⁻⁶  | 3.11  113  2.03×10⁻⁵  | 2.38   81  7.60×10⁻⁶
         | 2.83   92  2.92×10⁻⁶  | 3.36  120  3.95×10⁻⁶  | 3.03  106  3.22×10⁻⁶  | 2.88  103  3.20×10⁻⁶
         | 3.05   84  2.96×10⁻⁶  | 4.14  126  3.84×10⁻⁶  | 3.08  101  5.93×10⁻⁶  | 3.55  105  2.97×10⁻⁶
         | 4.02  127  3.71×10⁻⁶  | 3.58  131  4.81×10⁻⁶  | 2.77  102  1.56×10⁻⁵  | 3.19  116  3.75×10⁻⁶
         | 3.30  101  2.18×10⁻⁶  | 3.56  126  2.94×10⁻⁶  | 3.86  126  3.10×10⁻⁶  | 2.67  103  4.30×10⁻⁶
         | 2.83   94  6.43×10⁻⁶  | 2.95  120  7.61×10⁻⁶  | 2.58   99  6.51×10⁻⁶  | 4.31   98  6.28×10⁻⁶
         | 2.77   90  2.63×10⁻⁶  | 3.72  131  3.33×10⁻⁶  | 3.36  113  2.75×10⁻⁶  | 3.45  119  2.67×10⁻⁶
         | 3.50   94  2.93×10⁻⁶  | 3.52  123  3.79×10⁻⁶  | 3.17  111  3.25×10⁻⁶  | 3.00  108  3.15×10⁻⁶
Average  | 2.956  94  3.81×10⁻⁶  | 3.494 126.2 4.85×10⁻⁶ | 3.279 115.2 6.81×10⁻⁶ | 3.145 104.2 4.34×10⁻⁶

Figure 4 shows the plot of the numerical results consisting of the original sparse signal, the measurement, and the reconstructed signal by each algorithm. Moreover, in Figure 5, we give a visual illustration of the convergence behavior of each method in terms of merit function values and relative error as the iteration number and computing time increase.

Figure 5: Comparison results of DLPA, SGCS, CGD, and PCG algorithms. From left to right: the trend of the MSE against the number of iterations and CPU time in seconds, and the trend of the objective function values against the number of iterations and CPU time in seconds.

4.4 Image restoration problem

Here, we illustrate the performance of DLPA in image restoration. The aim of this experiment is to restore a two-dimensional image from its limited measurements. In this experiment, we use a matrix $A$ whose $k$ rows are randomly selected from the $d \times d$ discrete wavelet transform (DWT) matrix. This type of matrix requires no storage and speeds up the matrix-vector products involving $A$ and $A^T$. The chosen parameters for DLPA are $\kappa = 0.1$, $\rho = 0.05$, $\gamma = 10^{-4}$. In this test, we use the colored classical test images (Lena, Barbara, Tiffany, Girl, and Mars), which were degraded using Gaussian blur and Gaussian noise of 10 and 20% (Figure 6).

Figure 6: Restoration of test images, 512×512 Lenna (top) and 720×576 Tiffany (bottom). From the left: the original image, the blurred image with 10% noise, the image restored by CGD, and the image restored by DLPA (right).

The classical test images were obtained from the website http://hlevkin.com/06testimages.htm. We compare the performance of DLPA against a similar algorithm specially designed for image restoration (CGD [37]). For fairness in comparing the algorithms, the iterative process of each algorithm starts at $x_0 = A^T b$ and terminates when $\mathrm{Tol} < 10^{-5}$. The quality of image restoration is evaluated in terms of

  • Signal-to-noise ratio (SNR, unit: dB).

  • Peak signal-to-noise ratio (PSNR, unit: dB) [48].

  • Structural similarity index (SSIM index [49]). The MATLAB implementation of the SSIM index can be obtained at http://www.cns.nyu.edu/lcv/ssim/.
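For reference, SNR and PSNR follow standard definitions and can be sketched as below. These are our own illustrative implementations, not the exact scripts used in the experiments; the SSIM index is considerably more involved (see [49]).

```python
import numpy as np

def snr(x, x_rec):
    """Signal-to-noise ratio in dB of the restoration x_rec of x."""
    x = np.asarray(x, float)
    err = x - np.asarray(x_rec, float)
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum(err ** 2))

def psnr(x, x_rec, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with pixel range [0, peak]."""
    mse = np.mean((np.asarray(x, float) - np.asarray(x_rec, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Both are decibel measures of the restoration error; larger values indicate a restored image closer to the original.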

A visual inspection of the images in Figure 7 shows that the quality of the images restored by the two algorithms is similar. Nevertheless, DLPA provides a valid approach to image restoration problems: its performance is competitive and it outperforms the CGD method, as reflected by larger SNR, PSNR, and SSIM values, which indicate that the images DLPA restores from the blurred and noisy inputs are closer to the originals than those restored by CGD. The SNR, PSNR, and SSIM comparisons for three color test images are reported in Table 2.

Figure 7

Restoration of test images, 720 × 576 Barbara (Top), followed by 720 × 576 Girl, 1,280 × 1,024 Mars. From the left: The original image, the blurred and noisy image with 10% noise, the restored image by CGD and DLPA (right).

Table 2

Parameter values of the restored color images

Blur and Noise DLPA CGD
Image ObjFun MSE SNR PSNR SSIM ObjFun MSE SNR PSNR SSIM
10% Tiffany 8.44 × 1 0 3 1.16 × 1 0 3 21 22.84 0.9153 8.44 × 1 0 3 1.17 × 1 0 3 20.93 22.76 0.9127
Girl 1.03 × 1 0 4 2.22 × 1 0 3 17.32 22.38 0.7297 1.03 × 1 0 4 2.28 × 1 0 3 17.2 22.26 0.7228
Mars 1.41 × 1 0 4 4.50 × 1 0 3 14.71 24.61 0.789 1.41 × 1 0 4 4.53 × 1 0 3 14.67 24.57 0.7881
Lenna 5.66 × 1 0 3 1.37 × 1 0 3 16.83 22.17 0.9144 5.66 × 1 0 3 1.40 × 1 0 3 16.7 22.03 0.9115
Barbara 8.49 × 1 0 3 3.96 × 1 0 3 13.71 20.13 0.6324 8.49 × 1 0 3 4.06 × 1 0 3 13.61 20.03 0.6263
20% Tiffany 1.23 × 1 0 4 1.37 × 1 0 3 20.4 22.24 0.8876 1.23 × 1 0 4 1.42 × 1 0 3 20.24 22.08 0.8811
Girl 1.65 × 1 0 4 2.50 × 1 0 3 16.83 21.89 0.6712 1.65 × 1 0 4 2.60 × 1 0 3 16.64 21.7 0.6579
Mars 3.36 × 1 0 4 5.04 × 1 0 3 14.24 24.14 0.7739 3.36 × 1 0 4 5.13 × 1 0 3 14.16 24.05 0.7717
Lenna 9.53 × 1 0 3 1.57 × 1 0 3 16.39 21.73 0.9015 9.53 × 1 0 3 1.63 × 1 0 3 16.18 21.51 0.8961
Barbara 1.46 × 1 0 4 4.25 × 1 0 3 13.42 19.84 0.606 1.46 × 1 0 4 4.40 × 1 0 3 13.27 19.69 0.5961

5 Conclusion

In this article, a conjugate gradient method for solving nonlinear equations with convex constraints is proposed. Under appropriate conditions, the global convergence of the method is established. The numerical experiments indicate that the proposed algorithm is practical and effective, and that it outperforms the CGD, PCG, and PDY methods on the given convex constrained benchmark test problems with dimensions ranging from n = 1,000 to n = 100,000 and different initial points. Furthermore, a major contribution of this article is the application of the proposed algorithm to the ℓ1-norm regularized problem in compressive sensing. Computational results from reconstructing sparse signals and images show that the proposed method is effective.

Acknowledgments

The authors are grateful to the anonymous referees and the editor for their useful comments, which have made the article clearer and more comprehensive than the earlier version. The authors acknowledge (i) the financial support provided by the Centre of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT, and (ii) the financial support provided by the "Mid-Career Research Grant" (N41A640089). In addition, Abdulkarim Hassan Ibrahim was supported by King Mongkut's University of Technology Thonburi's Postdoctoral Fellowship. Also, Abdulkarim Hassan Ibrahim and Auwal Bala Abubakar acknowledge with thanks the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University.

  1. Conflict of interest: The authors state no conflict of interest.

Appendix A

Problem 1

This problem is the Exponential function [8,52] with constraint set S = \mathbb{R}^n_+, that is,

g_1(x) = e^{x_1} - 1, \qquad g_i(x) = e^{x_i} + x_i - 1, \quad i = 2, 3, \ldots, n.
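As a quick sanity check on Problem 1, a short NumPy sketch (the function name is ours) evaluating the residual; the origin is a solution, since e^0 - 1 = 0 and e^0 + 0 - 1 = 0.

```python
import numpy as np

def exponential_residual(x):
    """Problem 1: g_1(x) = e^{x_1} - 1 and g_i(x) = e^{x_i} + x_i - 1 for i >= 2."""
    g = np.exp(x) + x - 1.0
    g[0] = np.exp(x[0]) - 1.0
    return g

print(exponential_residual(np.zeros(5)))  # all zeros: x* = 0 solves the system
```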

Problem 2

Modified logarithmic function [8] with constraint set S = \{ x \in \mathbb{R}^n : \sum_{i=1}^{n} x_i \le n,\; x_i > -1,\; i = 1, 2, \ldots, n \}, that is,

g_i(x) = \ln(x_i + 1) - \frac{x_i}{n}, \quad i = 1, 2, \ldots, n.
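A corresponding one-line sketch for Problem 2 (again with our own naming); the residual is well defined only for x_i > -1, which the constraint set guarantees, and the origin is again a zero of G.

```python
import numpy as np

def modified_log_residual(x):
    """Problem 2: g_i(x) = ln(x_i + 1) - x_i / n, defined for x_i > -1."""
    n = x.size
    return np.log(x + 1.0) - x / n

print(modified_log_residual(np.zeros(4)))  # [0. 0. 0. 0.]
```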

Problem 3

The function g_i(x) [53] with S = \mathbb{R}^n_+ is defined by

g_i(x) = \min\big(\min(|x_i|, x_i^2), \max(|x_i|, x_i^3)\big), \quad i = 1, 2, \ldots, n.
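With absolute values in the inner arguments (as in [53]; the bars are easily lost in text extraction), Problem 3 vectorizes directly. The sketch below uses our own naming.

```python
import numpy as np

def problem3_residual(x):
    """min(min(|x_i|, x_i^2), max(|x_i|, x_i^3)) applied elementwise."""
    a = np.abs(x)
    return np.minimum(np.minimum(a, x**2), np.maximum(a, x**3))

print(problem3_residual(np.array([0.0, 1.0, -0.5])))  # components 0, 1, 0.25
```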

Problem 4

Strictly convex function I [20] with constraint set S = \mathbb{R}^n_+, that is,

g_i(x) = e^{x_i} - 1, \quad i = 1, 2, 3, \ldots, n.

Problem 5

Strictly convex function II [20] with constraint set S = \mathbb{R}^n_+, that is,

g_i(x) = \frac{i}{n} e^{x_i} - 1, \quad i = 1, 2, 3, \ldots, n.
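Problems 4 and 5 differ only by the i/n scaling; a sketch (our names) that also verifies the closed-form zero x_i = ln(n/i) of Problem 5, since (i/n) e^{ln(n/i)} = 1.

```python
import numpy as np

def strictly_convex_I(x):
    """Problem 4: g_i(x) = e^{x_i} - 1."""
    return np.exp(x) - 1.0

def strictly_convex_II(x):
    """Problem 5: g_i(x) = (i/n) e^{x_i} - 1 for i = 1, ..., n."""
    n = x.size
    return (np.arange(1, n + 1) / n) * np.exp(x) - 1.0

n = 5
x_star = np.log(n / np.arange(1, n + 1))  # closed-form zero of Problem 5
print(np.abs(strictly_convex_II(x_star)).max())  # zero up to rounding
```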

Problem 6

Tridiagonal exponential function [50] with constraint set S = \mathbb{R}^n_+, that is,

g_1(x) = x_1 - e^{\cos(h(x_1 + x_2))}, \quad g_i(x) = x_i - e^{\cos(h(x_{i-1} + x_i + x_{i+1}))} \;\text{for}\; 2 \le i \le n - 1, \quad g_n(x) = x_n - e^{\cos(h(x_{n-1} + x_n))}, \qquad \text{where } h = \frac{1}{n+1}.
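The three-case definition of Problem 6 is just a truncated neighbour sum; a vectorized sketch (our naming, valid for n >= 3):

```python
import numpy as np

def tridiagonal_exp_residual(x):
    """Problem 6: g_i(x) = x_i - exp(cos(h * s_i)), where s_i sums x_i and its
    immediate neighbours (truncated at both ends) and h = 1 / (n + 1)."""
    n = x.size
    h = 1.0 / (n + 1)
    s = np.empty(n)
    s[0] = x[0] + x[1]
    s[1:-1] = x[:-2] + x[1:-1] + x[2:]
    s[-1] = x[-2] + x[-1]
    return x - np.exp(np.cos(h * s))

print(tridiagonal_exp_residual(np.full(4, 0.5)))
```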

Problem 7

Nonsmooth function [54] with constraint set S = \{ x \in \mathbb{R}^n : \sum_{i=1}^{n} x_i \le n,\; x_i \ge -1,\; 1 \le i \le n \}, that is,

g_i(x) = x_i - \sin |x_i - 1|, \quad i = 1, 2, 3, \ldots, n.

Problem 8

The Trig exp function [8] with constraint set S = \mathbb{R}^n_+, that is,

g_1(x) = 3x_1^3 + 2x_2 - 5 + \sin(x_1 - x_2)\sin(x_1 + x_2), \quad g_i(x) = 3x_i^3 + 2x_{i+1} - 5 + \sin(x_i - x_{i+1})\sin(x_i + x_{i+1}) + 4x_i - x_{i-1} e^{x_{i-1} - x_i} - 3 \;\text{for}\; i = 2, 3, \ldots, n - 1, \quad g_n(x) = 4x_n - x_{n-1} e^{x_{n-1} - x_n} - 3.
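The Trig exp system can be checked numerically: at the all-ones vector every sine factor vanishes and each component reduces to 3 + 2 - 5 = 0, 3 + 2 - 5 + 4 - 1 - 3 = 0, and 4 - 1 - 3 = 0, so x = (1, ..., 1) solves it. A sketch with our own naming:

```python
import numpy as np

def trig_exp_residual(x):
    """Problem 8 (Trig exp) residual; needs n >= 3."""
    n = x.size
    g = np.empty(n)
    g[0] = 3 * x[0]**3 + 2 * x[1] - 5 + np.sin(x[0] - x[1]) * np.sin(x[0] + x[1])
    i = np.arange(1, n - 1)
    g[i] = (3 * x[i]**3 + 2 * x[i + 1] - 5
            + np.sin(x[i] - x[i + 1]) * np.sin(x[i] + x[i + 1])
            + 4 * x[i] - x[i - 1] * np.exp(x[i - 1] - x[i]) - 3)
    g[-1] = 4 * x[-1] - x[-2] * np.exp(x[-2] - x[-1]) - 3
    return g

print(np.abs(trig_exp_residual(np.ones(6))).max())  # 0.0
```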

Problem 9

The Penalty I function [51] with S = \mathbb{R}^n_+, that is,

t = \sum_{i=1}^{n} x_i^2, \quad c = 10^{-5}, \qquad g_i(x) = 2c(x_i - 1) + 4(t - 0.25)\, x_i, \quad i = 1, 2, 3, \ldots, n.
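A sketch of the Penalty I residual (our naming). Choosing x_i = 0.25 with n = 4 gives t = 4(0.25)^2 = 0.25, so the quartic term drops out and only the 2c(x_i - 1) term remains:

```python
import numpy as np

def penalty_I_residual(x, c=1e-5):
    """Problem 9: g_i(x) = 2c(x_i - 1) + 4(t - 0.25) x_i with t = sum_j x_j^2."""
    t = np.sum(x**2)
    return 2 * c * (x - 1.0) + 4 * (t - 0.25) * x

x = np.full(4, 0.25)  # t = 0.25, so the second term vanishes
print(penalty_I_residual(x))  # 2c(x - 1) = -1.5e-05 in every component
```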

Appendix B

Throughout this section, “DIM” denotes the dimension, “INP” the initial point, “NITER” the number of iterations, “NFE” the number of function evaluations, “CPU” the CPU running time, and “NM” the norm of the function at the approximate solution. The initial points used for these experiments are

x_1 = (0.1, 0.1, \ldots, 0.1), \quad x_2 = (0.2, 0.2, \ldots, 0.2), \quad x_3 = (0.5, 0.5, \ldots, 0.5), \quad x_4 = (1.2, 1.2, \ldots, 1.2), \quad x_5 = (1.5, 1.5, \ldots, 1.5), \quad x_6 = (2, 2, \ldots, 2), \quad x_7 = \operatorname{rand}(n, 1).
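The seven starting points can be generated as follows (a sketch; the MATLAB-style rand(n, 1) from the text is mimicked with NumPy's uniform generator):

```python
import numpy as np

def initial_points(n, seed=None):
    """Build the starting points x1, ..., x7 listed above for dimension n."""
    rng = np.random.default_rng(seed)
    pts = {f"x{k}": np.full(n, v, dtype=float)
           for k, v in enumerate((0.1, 0.2, 0.5, 1.2, 1.5, 2.0), start=1)}
    pts["x7"] = rng.random(n)  # uniform on [0, 1), as MATLAB's rand(n, 1)
    return pts

pts = initial_points(1000, seed=0)
print(pts["x4"][:3])  # [1.2 1.2 1.2]
```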

Table A1

Results of the four algorithms on Problem 1

DLPA CGD PCG PDY
DIM INP NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM
1,000 x 1 2 7 0.17013 0 42 125 0.032827 9.97 × 1 0 7 18 71 0.023939 5.72 × 1 0 6 16 64 0.025621 3.45 × 1 0 7
x 2 2 7 0.016827 0 45 134 0.021328 9.45 × 1 0 7 18 71 0.020052 9.82 × 1 0 6 16 64 0.020328 7.03 × 1 0 7
x 3 2 7 0.021985 0 48 143 0.018669 9.82 × 1 0 7 19 75 0.015789 7.10 × 1 0 6 17 68 0.017887 6.22 × 1 0 7
x 4 2 7 0.007799 0 50 149 0.049726 9.70 × 1 0 7 18 71 0.028495 8.27 × 1 0 6 18 72 0.017678 4.54 × 1 0 7
x 5 2 7 0.02276 0 51 152 0.043668 8.17 × 1 0 7 63 251 0.051309 9.58 × 1 0 6 18 72 0.037142 3.65 × 1 0 7
x 6 2 7 0.005613 0 51 152 0.017662 8.56 × 1 0 7 61 243 0.051305 9.15 × 1 0 6 18 72 0.013545 3.80 × 1 0 7
x 7 22 88 0.042571 9.17 × 1 0 7 45 134 0.020863 9.53 × 1 0 7 18 71 0.008538 9.36 × 1 0 6 17 68 0.027463 7.26 × 1 0 7
5,000 x 1 2 7 0.027713 0 41 122 0.088477 8.34 × 1 0 7 18 71 0.039251 7.42 × 1 0 6 16 64 0.081137 7.61 × 1 0 7
x 2 2 7 0.024809 0 43 128 0.12323 9.81 × 1 0 7 19 75 0.03493 6.53 × 1 0 6 17 68 0.052426 5.15 × 1 0 7
x 3 2 7 0.018235 0 47 140 0.06029 8.05 × 1 0 7 20 79 0.067429 5.20 × 1 0 6 18 72 0.067676 4.63 × 1 0 7
x 4 2 7 0.019507 0 48 143 0.061024 9.93 × 1 0 7 19 75 0.037607 8.10 × 1 0 6 19 76 0.056304 3.38 × 1 0 7
x 5 2 7 0.01573 0 49 146 0.10768 8.36 × 1 0 7 62 247 0.093514 9.53 × 1 0 6 18 72 0.056564 8.12 × 1 0 7
x 6 2 7 0.023182 0 49 146 0.21444 8.76 × 1 0 7 60 239 0.075961 9.10 × 1 0 6 18 72 0.2044 8.10 × 1 0 7
x 7 23 92 0.13021 8.61 × 1 0 7 48 143 0.059287 9.71 × 1 0 7 19 75 0.036299 9.21 × 1 0 6 18 72 0.05685 5.32 × 1 0 7
10,000 x 1 2 7 0.020424 0 40 119 0.084429 8.97 × 1 0 7 18 71 0.050292 9.50 × 1 0 6 17 68 0.10777 3.55 × 1 0 7
x 2 2 7 0.020986 0 43 128 0.087432 8.26 × 1 0 7 19 75 0.058478 8.15 × 1 0 6 17 68 0.11432 7.27 × 1 0 7
x 3 2 7 0.021304 0 46 137 0.1197 8.46 × 1 0 7 20 79 0.24159 6.74 × 1 0 6 18 72 0.17287 6.55 × 1 0 7
x 4 2 7 0.020238 0 48 143 0.16653 8.30 × 1 0 7 20 79 0.17969 5.11 × 1 0 6 19 76 0.11226 4.77 × 1 0 7
x 5 2 7 0.018616 0 48 143 0.17932 8.75 × 1 0 7 62 247 0.32969 8.87 × 1 0 6 20 80 0.16972 4.52 × 1 0 7
x 6 2 7 0.020948 0 48 143 0.10077 9.17 × 1 0 7 59 235 0.16711 9.96 × 1 0 6 19 76 0.13653 5.51 × 1 0 7
x 7 24 96 0.29692 5.59 × 1 0 7 42 125 0.082303 8.81 × 1 0 7 20 79 0.068144 5.77 × 1 0 6 18 72 0.093846 7.54 × 1 0 7
50,000 x 1 2 7 0.071968 0 39 116 0.32624 8.43 × 1 0 7 19 75 0.22729 8.80 × 1 0 6 17 68 0.34849 7.93 × 1 0 7
x 2 2 7 0.075216 0 41 122 0.50848 9.37 × 1 0 7 20 79 0.20933 7.39 × 1 0 6 18 72 0.39047 5.44 × 1 0 7
x 3 2 7 0.074684 0 44 131 0.32692 9.16 × 1 0 7 21 83 0.25386 6.31 × 1 0 6 19 76 0.70322 4.86 × 1 0 7
x 4 2 7 0.072245 0 46 137 0.35336 8.84 × 1 0 7 21 83 0.27996 5.10 × 1 0 6 20 80 0.3986 9.70 × 1 0 7
x 5 2 7 0.12074 0 46 137 0.51805 9.34 × 1 0 7 61 243 0.79133 8.85 × 1 0 6 22 88 0.63218 8.63 × 1 0 7
x 6 2 7 0.11927 0 46 137 0.3676 9.78 × 1 0 7 59 235 0.62043 8.50 × 1 0 6 23 92 0.65767 8.62 × 1 0 7
x 7 25 100 0.38258 5.78 × 1 0 7 45 134 0.33932 8.80 × 1 0 7 21 83 0.31257 5.80 × 1 0 6 19 76 0.46319 5.61 × 1 0 7
100,000 x 1 2 7 0.16043 0 39 116 1.3011 7.72 × 1 0 7 20 79 0.44257 5.52 × 1 0 6 18 72 0.78103 3.76 × 1 0 7
x 2 2 7 0.24564 0 41 122 0.64648 8.33 × 1 0 7 21 83 0.50043 4.62 × 1 0 6 18 72 0.95344 7.69 × 1 0 7
x 3 2 7 0.17108 0 44 131 0.74197 7.92 × 1 0 7 21 83 0.45522 8.78 × 1 0 6 19 76 0.80751 6.88 × 1 0 7
x 4 2 7 0.16088 0 45 134 0.65925 9.66 × 1 0 7 21 83 0.47627 7.21 × 1 0 6 23 92 1.311 3.63 × 1 0 7
x 5 2 7 0.15733 0 46 137 0.76379 7.99 × 1 0 7 60 239 1.3303 9.73 × 1 0 6 23 92 1.2155 9.61 × 1 0 7
x 6 2 7 0.21639 0 46 137 0.75917 8.38 × 1 0 7 58 231 1.5532 9.42 × 1 0 6 26 104 1.2941 3.39 × 1 0 7
x 7 25 100 0.83643 8.23 × 1 0 7 40 119 0.70986 6.67 × 1 0 7 21 83 0.4699 8.19 × 1 0 6 20 80 1.0936 7.78 × 1 0 7
Table A2

Results of the four algorithms on Problem 2

DLPA CGD PCG PDY
DIM INP NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM
1,000 x 1 7 23 0.074425 2.25 × 1 0 8 55 163 0.022267 8.99 × 1 0 7 15 58 0.011585 8.59 × 1 0 6 13 51 0.017461 7.68 × 1 0 7
x 2 11 35 0.008824 2.20 × 1 0 8 61 181 0.023242 8.88 × 1 0 7 11 41 0.021996 9.07 × 1 0 6 15 59 0.01307 3.49 × 1 0 7
x 3 9 30 0.010689 8.89 × 1 0 9 69 205 0.026108 8.33 × 1 0 7 17 65 0.012742 6.44 × 1 0 6 16 63 0.011515 6.98 × 1 0 7
x 4 9 30 0.011504 7.88 × 1 0 9 76 226 0.027669 9.17 × 1 0 7 18 68 0.012375 6.00 × 1 0 6 18 71 0.016254 3.52 × 1 0 7
x 5 11 35 0.009612 2.92 × 1 0 8 78 232 0.0297 9.18 × 1 0 7 13 47 0.009249 7.58 × 1 0 6 18 71 0.023554 5.13 × 1 0 7
x 6 12 37 0.008803 1.84 × 1 0 8 81 241 0.090616 8.64 × 1 0 7 18 67 0.0198 5.40 × 1 0 6 18 71 0.015701 8.59 × 1 0 7
x 7 46 142 0.022041 8.99 × 1 0 7 72 214 0.099305 8.60 × 1 0 7 19 73 0.019914 6.57 × 1 0 6 17 67 0.023155 4.49 × 1 0 7
5,000 x 1 7 23 0.023975 3.90 × 1 0 9 59 175 0.10221 8.02 × 1 0 7 16 62 0.060334 9.35 × 1 0 6 14 55 0.046657 5.44 × 1 0 7
x 2 9 30 0.03979 3.05 × 1 0 10 64 190 0.11064 9.97 × 1 0 7 12 45 0.052111 8.80 × 1 0 6 15 59 0.052458 7.63 × 1 0 7
x 3 9 30 0.026774 1.21 × 1 0 9 72 214 0.31699 9.37 × 1 0 7 18 69 0.071002 6.98 × 1 0 6 17 67 0.088482 5.12 × 1 0 7
x 4 9 30 0.021167 1.05 × 1 0 9 80 238 0.19699 8.26 × 1 0 7 19 72 0.091751 6.45 × 1 0 6 18 71 0.15662 7.73 × 1 0 7
x 5 9 30 0.021479 4.13 × 1 0 10 82 244 0.15031 8.26 × 1 0 7 14 51 0.039588 6.71 × 1 0 6 19 75 0.055461 3.75 × 1 0 7
x 6 8 26 0.018379 3.71 × 1 0 10 84 250 0.14285 9.71 × 1 0 7 19 71 0.17243 5.71 × 1 0 6 19 75 0.058396 6.27 × 1 0 7
x 7 47 145 0.11168 8.78 × 1 0 7 75 223 0.18793 9.70 × 1 0 7 20 77 0.069993 6.83 × 1 0 6 17 67 0.059551 9.87 × 1 0 7
10,000 x 1 7 23 0.13461 2.20 × 1 0 9 60 178 0.1877 9.04 × 1 0 7 17 66 0.14059 6.60 × 1 0 6 14 55 0.091971 7.66 × 1 0 7
x 2 9 31 0.065445 1.28 × 1 0 10 66 196 0.21616 9.00 × 1 0 7 13 49 0.079618 6.11 × 1 0 6 16 63 0.11465 3.55 × 1 0 7
x 3 9 31 0.040512 5.98 × 1 0 10 74 220 0.55562 8.46 × 1 0 7 18 69 0.087002 9.83 × 1 0 6 17 67 0.088759 7.23 × 1 0 7
x 4 9 31 0.044424 5.14 × 1 0 10 81 241 0.23776 9.32 × 1 0 7 19 72 0.29391 9.07 × 1 0 6 19 75 0.10116 3.63 × 1 0 7
x 5 9 30 0.032833 1.80 × 1 0 10 83 247 0.25269 9.33 × 1 0 7 14 51 0.093065 9.18 × 1 0 6 19 75 0.32788 5.29 × 1 0 7
x 6 8 26 0.026055 1.56 × 1 0 10 86 256 0.26349 8.77 × 1 0 7 19 71 0.10739 8.02 × 1 0 6 19 76 0.10689 9.51 × 1 0 7
x 7 16 56 0.22972 1.33 × 1 0 7 77 229 0.60333 8.80 × 1 0 7 20 77 0.23183 9.67 × 1 0 6 18 71 0.15903 4.65 × 1 0 7
50,000 x 1 11 41 0.13894 4.01 × 1 0 7 64 190 0.71682 8.26 × 1 0 7 18 70 1.049 7.37 × 1 0 6 15 59 0.30307 5.78 × 1 0 7
x 2 12 45 0.1588 8.99 × 1 0 7 70 208 0.82915 8.23 × 1 0 7 14 53 0.43405 6.74 × 1 0 6 16 63 0.30282 7.92 × 1 0 7
x 3 11 40 0.26237 5.62 × 1 0 7 77 229 0.93068 9.67 × 1 0 7 20 77 0.46674 5.50 × 1 0 6 18 71 0.35047 5.36 × 1 0 7
x 4 11 40 0.16557 4.69 × 1 0 7 85 253 1.174 8.52 × 1 0 7 21 80 0.48238 5.07 × 1 0 6 21 84 0.65306 3.43 × 1 0 7
x 5 15 57 0.20145 6.74 × 1 0 7 87 259 1.0117 8.53 × 1 0 7 16 59 0.39189 5.02 × 1 0 6 21 84 0.47698 4.72 × 1 0 7
x 6 14 53 0.25011 5.95 × 1 0 7 90 268 1.2734 8.02 × 1 0 7 20 75 0.30461 8.93 × 1 0 6 21 84 0.45393 4.77 × 1 0 7
x 7 11 42 0.36762 7.33 × 1 0 7 81 241 1.5179 8.03 × 1 0 7 22 85 0.6344 5.40 × 1 0 6 19 75 0.50783 3.46 × 1 0 7
100,000 x 1 11 41 0.29554 5.48 × 1 0 7 65 193 1.5148 9.34 × 1 0 7 19 74 0.68754 5.22 × 1 0 6 15 59 0.55761 8.17 × 1 0 7
x 2 13 49 0.32502 5.02 × 1 0 7 71 211 1.6716 9.30 × 1 0 7 14 53 0.4871 9.52 × 1 0 6 17 67 0.73193 3.76 × 1 0 7
x 3 11 40 0.33047 7.42 × 1 0 7 79 235 2.5439 8.75 × 1 0 7 20 77 0.61388 7.78 × 1 0 6 18 72 0.7856 9.65 × 1 0 7
x 4 11 40 0.53828 6.16 × 1 0 7 86 256 2.1762 9.64 × 1 0 7 21 80 0.77524 7.17 × 1 0 6 22 88 1.4721 8.28 × 1 0 7
x 5 15 57 0.50928 9.55 × 1 0 7 88 262 2.5093 9.64 × 1 0 7 16 59 0.7333 7.07 × 1 0 6 22 88 0.92681 8.18 × 1 0 7
x 6 14 53 0.36664 8.40 × 1 0 7 91 271 2.5388 9.07 × 1 0 7 21 79 0.62957 6.32 × 1 0 6 22 88 1.0605 7.87 × 1 0 7
x 7 12 46 0.58299 8.34 × 1 0 8 82 244 4.1414 9.09 × 1 0 7 22 85 1.0149 7.69 × 1 0 6 20 80 0.85853 5.47 × 1 0 7
Table A3

Results of the four algorithms on Problem 3

DLPA CGD PCG PDY
DIM INP NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM
1,000 x 1 2 6 0.064338 0 1 2 0.067331 0 1 3 0.009234 0 2 6 0.010702 0
x 2 2 6 0.006544 0 1 2 0.00344 0 1 3 0.00508 0 2 6 0.004492 0
x 3 2 6 0.00865 0 1 2 0.012356 0 1 3 0.007402 0 2 6 0.005032 0
x 4 3 11 0.005012 0 1 3 0.007337 0 1 4 0.003342 0 2 6 0.007257 0
x 5 3 11 0.012137 0 1 3 0.004939 0 1 4 0.008983 0 2 6 0.00552 0
x 6 3 11 0.015224 0 1 3 0.002813 0 1 4 0.005557 0 2 6 0.005098 0
x 7 3 10 0.006421 0 1 2 0.004354 0 1 3 0.006646 0 2 6 0.005345 0
5,000 x 1 2 6 0.020433 0 1 2 0.006347 0 1 3 0.028224 0 2 6 0.016955 0
x 2 2 6 0.028361 0 1 2 0.019374 0 1 3 0.006462 0 2 6 0.010855 0
x 3 2 6 0.01671 0 1 2 0.014731 0 1 3 0.008709 0 2 6 0.016312 0
x 4 3 11 0.021177 0 1 3 0.008121 0 1 4 0.008264 0 2 6 0.015722 0
x 5 3 11 0.025008 0 1 3 0.008753 0 1 4 0.02051 0 2 6 0.012802 0
x 6 3 11 0.036346 0 1 3 0.008274 0 1 4 0.008641 0 2 6 0.010878 0
x 7 3 10 0.020688 0 1 2 0.009955 0 1 3 0.016083 0 2 6 0.012291 0
10,000 x 1 2 6 0.026414 0 1 2 0.015737 0 1 3 0.017399 0 2 6 0.017477 0
x 2 2 6 0.017871 0 1 2 0.011691 0 1 3 0.049863 0 2 6 0.023615 0
x 3 2 6 0.017753 0 1 2 0.010476 0 1 3 0.020938 0 2 6 0.02534 0
x 4 3 11 0.047465 0 1 3 0.017555 0 1 4 0.013493 0 2 6 0.01481 0
x 5 3 11 0.05174 0 1 3 0.015232 0 1 4 0.01854 0 2 6 0.016278 0
x 6 3 11 0.033784 0 1 3 0.018349 0 1 4 0.020158 0 2 6 0.021746 0
x 7 3 10 0.027656 0 1 2 0.015058 0 1 3 0.010327 0 2 6 0.025362 0
50,000 x 1 2 6 0.095627 0 1 2 0.039692 0 1 3 0.035776 0 2 6 0.073395 0
x 2 2 6 0.10904 0 1 2 0.05435 0 1 3 0.09452 0 2 6 0.076998 0
x 3 2 6 0.076738 0 1 2 0.10833 0 1 3 0.038586 0 2 6 0.086311 0
x 4 3 11 0.13149 0 1 3 0.055293 0 1 4 0.04676 0 2 6 0.06221 0
x 5 3 11 0.10601 0 1 3 0.055897 0 1 4 0.04606 0 2 6 0.045254 0
x 6 3 11 0.10964 0 1 3 0.042031 0 1 4 0.06606 0 2 6 0.049965 0
x 7 3 10 0.20852 0 1 2 0.041075 0 1 3 0.041229 0 2 7 0.072103 0
100,000 x 1 2 6 0.15876 0 1 2 0.077989 0 1 3 0.080857 0 2 6 0.2192 0
x 2 2 6 0.14048 0 1 2 0.14591 0 1 3 0.084118 0 2 6 0.12693 0
x 3 2 6 0.1368 0 1 2 0.093324 0 1 3 0.090872 0 2 6 0.12259 0
x 4 3 11 0.32351 0 1 3 0.081963 0 1 4 0.10948 0 2 6 0.09157 0
x 5 3 11 0.3491 0 1 3 0.082769 0 1 4 0.091172 0 2 6 0.15814 0
x 6 3 11 0.24166 0 1 3 0.079141 0 1 4 0.077533 0 2 6 0.12627 0
x 7 3 10 0.1933 0 1 2 0.10147 0 1 3 0.14648 0 2 7 0.19345 0
Table A4

Results of the four algorithms on Problem 4

DLPA CGD PCG PDY
DIM INP NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM
1,000 x 1 2 7 0.007566 0 68 203 0.018713 8.60 × 1 0 7 18 71 0.006385 9.93 × 1 0 6 15 60 0.01291 5.13 × 1 0 7
x 2 2 7 0.003992 0 71 212 0.019424 8.29 × 1 0 7 19 75 0.00879 8.75 × 1 0 6 16 64 0.008354 3.59 × 1 0 7
x 3 2 7 0.008155 0 74 221 0.030012 8.89 × 1 0 7 20 79 0.01007 7.15 × 1 0 6 16 64 0.014477 9.42 × 1 0 7
x 4 2 7 0.005105 0 76 227 0.037116 9.28 × 1 0 7 47 187 0.022131 7.83 × 1 0 6 15 60 0.019795 6.44 × 1 0 7
x 5 2 7 0.00377 0 76 227 0.030506 9.95 × 1 0 7 46 183 0.020153 9.76 × 1 0 6 17 68 0.012293 3.91 × 1 0 7
x 6 2 7 0.006567 0 77 230 0.046013 8.34 × 1 0 7 41 163 0.019948 8.77 × 1 0 6 17 68 0.015873 7.89 × 1 0 7
x 7 8 32 0.00601 3.93 × 1 0 7 74 221 0.039914 8.92 × 1 0 7 20 79 0.014447 6.89 × 1 0 6 17 68 0.01124 4.95 × 1 0 7
5,000 x 1 2 7 0.017614 0 71 212 0.098815 9.85 × 1 0 7 20 79 0.027774 5.57 × 1 0 6 16 64 0.046207 3.86 × 1 0 7
x 2 2 7 0.019141 0 74 221 0.26551 9.49 × 1 0 7 20 79 0.049376 9.80 × 1 0 6 16 64 0.043951 8.02 × 1 0 7
x 3 2 7 0.016317 0 78 233 0.11092 8.14 × 1 0 7 21 83 0.02992 8.01 × 1 0 6 17 68 0.11923 7.00 × 1 0 7
x 4 2 7 0.026088 0 80 239 0.17153 8.50 × 1 0 7 49 195 0.054733 9.46 × 1 0 6 16 64 0.035514 4.74 × 1 0 7
x 5 2 7 0.018066 0 80 239 0.15231 9.11 × 1 0 7 49 195 0.055053 8.68 × 1 0 6 17 68 0.056685 8.74 × 1 0 7
x 6 2 7 0.031287 0 80 239 0.1783 9.55 × 1 0 7 44 175 0.054003 7.79 × 1 0 6 19 76 0.098253 5.11 × 1 0 7
x 7 9 36 0.031309 1.95 × 1 0 9 78 233 0.16701 8.22 × 1 0 7 21 83 0.17667 7.84 × 1 0 6 18 72 0.040089 3.74 × 1 0 7
10,000 x 1 2 7 0.02354 0 73 218 0.12038 8.91 × 1 0 7 20 79 0.039966 7.88 × 1 0 6 16 64 0.060455 5.46 × 1 0 7
x 2 2 7 0.088279 0 76 227 0.15193 8.59 × 1 0 7 21 83 0.041329 6.94 × 1 0 6 17 68 0.060303 3.76 × 1 0 7
x 3 2 7 0.021516 0 79 236 0.13295 9.21 × 1 0 7 22 87 0.041257 5.67 × 1 0 6 17 68 0.067346 9.90 × 1 0 7
x 4 2 7 0.017872 0 81 242 0.30772 9.62 × 1 0 7 50 199 0.15237 9.84 × 1 0 6 19 76 0.097381 3.70 × 1 0 7
x 5 2 7 0.02425 0 82 245 0.15805 8.25 × 1 0 7 50 199 0.10006 9.03 × 1 0 6 18 72 0.15016 4.15 × 1 0 7
x 6 2 7 0.019202 0 82 245 0.13763 8.64 × 1 0 7 45 179 0.088824 8.11 × 1 0 6 19 76 0.26631 7.22 × 1 0 7
x 7 8 32 0.030089 8.41 × 1 0 7 79 236 0.1385 9.26 × 1 0 7 22 87 0.044205 5.50 × 1 0 6 18 72 0.073811 5.28 × 1 0 7
50,000 x 1 2 7 0.10895 0 77 230 0.52177 8.16 × 1 0 7 21 83 0.16902 8.83 × 1 0 6 17 68 0.34669 4.04 × 1 0 7
x 2 2 7 0.077531 0 79 236 0.93906 9.84 × 1 0 7 22 87 0.16177 7.78 × 1 0 6 17 68 0.29715 8.40 × 1 0 7
x 3 2 7 0.06403 0 83 248 0.51902 8.44 × 1 0 7 23 91 0.17567 6.36 × 1 0 6 18 72 0.25477 7.39 × 1 0 7
x 4 2 7 0.073066 0 85 254 0.67964 8.81 × 1 0 7 53 211 0.41507 8.75 × 1 0 6 20 80 0.31091 6.25 × 1 0 7
x 5 2 7 0.066744 0 85 254 0.75109 9.44 × 1 0 7 53 211 0.76972 8.02 × 1 0 6 20 80 0.31354 8.13 × 1 0 7
x 6 2 7 0.068082 0 85 254 0.75383 9.89 × 1 0 7 47 187 0.41794 9.80 × 1 0 6 22 88 0.45452 9.65 × 1 0 7
x 7 9 36 0.10149 2.61 × 1 0 9 83 248 0.5637 8.51 × 1 0 7 23 91 0.30462 6.13 × 1 0 6 19 76 0.33585 6.74 × 1 0 7
100,000 x 1 2 7 0.14552 0 78 233 1.1562 9.24 × 1 0 7 22 87 0.6649 6.25 × 1 0 6 17 68 0.53401 5.71 × 1 0 7
x 2 2 7 0.16965 0 81 242 1.0433 8.90 × 1 0 7 23 91 0.64192 5.51 × 1 0 6 18 72 0.63522 3.98 × 1 0 7
x 3 2 7 0.12824 0 84 251 1.1078 9.55 × 1 0 7 23 91 0.54464 8.99 × 1 0 6 19 76 0.56664 9.57 × 1 0 7
x 4 2 7 0.14059 0 86 257 1.3126 9.97 × 1 0 7 54 215 0.92493 9.10 × 1 0 6 22 88 1.1703 3.99 × 1 0 7
x 5 2 7 0.16014 0 87 260 1.2664 8.55 × 1 0 7 54 215 0.91094 8.34 × 1 0 6 24 96 1.1042 3.66 × 1 0 7
x 6 2 7 0.13215 0 87 260 1.2368 8.95 × 1 0 7 49 195 0.8193 7.49 × 1 0 6 26 104 1.2214 3.55 × 1 0 7
x 7 9 36 0.17615 5.07 × 1 0 9 84 251 1.3349 9.59 × 1 0 7 23 91 0.37693 8.68 × 1 0 6 19 76 0.58765 9.54 × 1 0 7
Table A5

Results of the four algorithms on Problem 5

DLPA CGD PCG PDY
DIM INP NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM
1,000 x 1 18 64 0.11572 8.12 × 1 0 7 90 268 0.025425 8.03 × 1 0 7 22 82 0.019189 7.48 × 1 0 6 19 75 0.015051 6.70 × 1 0 7
x 2 15 53 0.0083 2.54 × 1 0 7 89 265 0.03337 9.07 × 1 0 7 23 87 0.010034 7.31 × 1 0 6 19 75 0.034037 6.02 × 1 0 7
x 3 12 45 0.007941 1.22 × 1 0 7 88 262 0.042066 8.96 × 1 0 7 23 89 0.017377 9.31 × 1 0 6 20 79 0.022374 8.17 × 1 0 7
x 4 13 50 0.009956 3.66 × 1 0 7 89 266 0.032059 9.36 × 1 0 7 49 195 0.048036 8.45 × 1 0 6 20 80 0.014641 4.14 × 1 0 7
x 5 15 58 0.00949 1.84 × 1 0 7 87 260 0.034997 9.16 × 1 0 7 53 211 0.034512 8.38 × 1 0 6 20 80 0.026044 3.51 × 1 0 7
x 6 18 70 0.011785 1.79 × 1 0 7 84 251 0.03039 8.32 × 1 0 7 46 183 0.073719 8.80 × 1 0 6 21 84 0.029389 3.89 × 1 0 7
x 7 17 66 0.013705 1.46 × 1 0 7 90 268 0.053359 9.88 × 1 0 7 159 634 0.092086 9.47 × 1 0 6 25 99 0.039309 8.56 × 1 0 7
5,000 x 1 21 78 0.050506 1.51 × 1 0 7 97 289 0.13941 8.47 × 1 0 7 24 90 0.041675 6.36 × 1 0 6 20 79 0.095884 6.26 × 1 0 7
x 2 17 62 0.031783 1.29 × 1 0 7 96 286 0.12426 9.58 × 1 0 7 25 94 0.066431 6.24 × 1 0 6 20 79 0.11393 5.64 × 1 0 7
x 3 13 48 0.033211 3.36 × 1 0 7 95 283 0.11123 9.47 × 1 0 7 25 97 0.075421 5.86 × 1 0 6 21 83 0.15342 7.12 × 1 0 7
x 4 13 50 0.026339 9.94 × 1 0 7 97 290 0.10939 8.05 × 1 0 7 53 211 0.08762 9.11 × 1 0 6 21 84 0.091379 3.38 × 1 0 7
x 5 14 56 0.033107 1.89 × 1 0 7 94 281 0.113 9.87 × 1 0 7 58 231 0.16965 8.56 × 1 0 6 21 84 0.059725 4.47 × 1 0 7
x 6 14 56 0.12118 5.06 × 1 0 7 91 272 0.24111 8.88 × 1 0 7 50 199 0.13724 7.65 × 1 0 6 21 84 0.10123 6.59 × 1 0 7
x 7 29 113 0.076196 1.44 × 1 0 7 97 289 0.12998 9.25 × 1 0 7 569 2274 0.93626 9.86 × 1 0 6 28 111 0.099315 7.71 × 1 0 7
10,000 x 1 25 93 0.088469 1.16 × 1 0 7 100 298 0.30427 8.69 × 1 0 7 25 94 0.058359 5.40 × 1 0 6 20 79 0.28404 9.79 × 1 0 7
x 2 15 54 0.053192 1.05 × 1 0 7 99 295 0.28747 9.83 × 1 0 7 25 94 0.08002 8.90 × 1 0 6 20 79 0.096846 8.67 × 1 0 7
x 3 14 51 0.048681 3.00 × 1 0 7 98 292 0.2542 9.72 × 1 0 7 25 97 0.10477 8.64 × 1 0 6 22 87 0.25671 4.07 × 1 0 7
x 4 14 54 0.042491 8.70 × 1 0 8 100 299 0.53275 8.29 × 1 0 7 55 219 0.12014 9.11 × 1 0 6 23 92 0.13161 4.76 × 1 0 7
x 5 14 55 0.044649 2.21 × 1 0 7 98 293 0.32311 8.14 × 1 0 7 60 239 0.1399 9.01 × 1 0 6 21 84 0.099765 7.05 × 1 0 7
x 6 19 75 0.086768 1.46 × 1 0 7 94 281 0.24911 9.15 × 1 0 7 51 203 0.13946 9.62 × 1 0 6 21 84 0.12319 5.31 × 1 0 7
x 7 29 113 0.12611 1.25 × 1 0 7 101 301 0.20685 8.70 × 1 0 7 559 2234 1.5095 9.85 × 1 0 6 23 92 0.15411 8.26 × 1 0 7
50,000 x 1 35 133 0.68704 6.38 × 1 0 7 107 319 1.0017 9.16 × 1 0 7 26 98 0.24433 6.75 × 1 0 6 23 92 0.59352 4.69 × 1 0 7
x 2 22 82 0.28951 1.34 × 1 0 7 107 319 0.91165 8.28 × 1 0 7 27 102 0.55775 5.16 × 1 0 6 23 92 0.38652 4.37 × 1 0 7
x 3 13 49 0.13459 1.54 × 1 0 7 106 316 0.90267 8.19 × 1 0 7 27 105 0.34906 5.28 × 1 0 6 22 88 0.4519 8.93 × 1 0 7
x 4 14 54 0.21802 2.09 × 1 0 7 107 320 1.0227 8.77 × 1 0 7 60 239 0.5385 8.66 × 1 0 6 24 96 0.53036 5.83 × 1 0 7
x 5 22 85 0.26387 5.56 × 1 0 7 105 314 0.85573 8.61 × 1 0 7 65 259 0.77201 9.05 × 1 0 6 24 96 0.66064 5.87 × 1 0 7
x 6 18 72 0.26261 1.87 × 1 0 7 101 302 1.0495 9.68 × 1 0 7 56 223 0.49952 8.19 × 1 0 6 23 92 0.72231 8.28 × 1 0 7
x 7 33 128 1.0173 1.90 × 1 0 7 107 319 0.83276 8.48 × 1 0 7 26 104 0.49014 4.88 × 1 0 7
100,000 x 1 149 588 6.2214 1.33 × 1 0 7 110 328 1.6519 9.39 × 1 0 7 26 98 0.41049 9.73 × 1 0 6 24 96 0.90228 8.11 × 1 0 7
x 2 38 146 1.4058 8.44 × 1 0 7 110 328 1.6591 8.50 × 1 0 7 27 102 0.42642 7.39 × 1 0 6 24 96 0.78175 7.59 × 1 0 7
x 3 14 52 0.26454 2.60 × 1 0 7 109 325 2.5612 8.40 × 1 0 7 27 105 0.46635 7.77 × 1 0 6 23 92 1.3826 4.30 × 1 0 7
x 4 14 54 0.29312 4.95 × 1 0 7 110 329 3.1086 9.00 × 1 0 7 62 247 1.2073 9.00 × 1 0 6 25 100 1.0341 3.79 × 1 0 7
x 5 19 74 0.45344 1.72 × 1 0 7 108 323 1.8875 8.84 × 1 0 7 67 267 1.2454 9.50 × 1 0 6 25 100 0.93248 5.83 × 1 0 7
x 6 20 79 0.61649 6.04 × 1 0 7 104 311 1.7326 9.94 × 1 0 7 58 231 1.2086 8.32 × 1 0 6 26 104 1.1311 3.96 × 1 0 7
x 7 65 257 2.9203 9.58 × 1 0 8 110 328 2.7078 9.83 × 1 0 7 24 96 0.84643 9.31 × 1 0 7
Table A6

Results of the four algorithms on Problem 6

DLPA CGD PCG PDY
DIM INP NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM
1,000 x 1 21 84 0.12688 9.09 × 1 0 7 83 248 0.071177 8.42 × 1 0 7 23 91 0.023983 9.28 × 1 0 6 18 72 0.029908 4.82 × 1 0 7
x 2 21 84 0.020466 8.74 × 1 0 7 83 248 0.10724 8.09 × 1 0 7 23 91 0.027114 8.92 × 1 0 6 18 72 0.024943 4.64 × 1 0 7
x 3 21 84 0.024501 7.70 × 1 0 7 82 245 0.052298 8.91 × 1 0 7 23 91 0.020109 7.86 × 1 0 6 18 72 0.036859 4.08 × 1 0 7
x 4 21 84 0.086669 5.27 × 1 0 7 80 239 0.070087 9.53 × 1 0 7 23 91 0.027474 5.38 × 1 0 6 17 68 0.032227 8.34 × 1 0 7
x 5 21 84 0.058296 4.23 × 1 0 7 79 236 0.07049 9.56 × 1 0 7 22 87 0.025722 8.62 × 1 0 6 17 68 0.032558 6.69 × 1 0 7
x 6 20 80 0.035393 6.23 × 1 0 7 77 230 0.062182 8.81 × 1 0 7 22 87 0.031593 5.08 × 1 0 6 17 68 0.047723 3.94 × 1 0 7
x 7 21 84 0.038427 7.75 × 1 0 7 82 245 0.038371 9.02 × 1 0 7 23 91 0.012772 7.96 × 1 0 6 18 72 0.019878 4.13 × 1 0 7
5,000 x 1 22 88 0.094735 8.14 × 1 0 7 86 257 0.25551 9.65 × 1 0 7 25 99 0.19274 5.22 × 1 0 6 19 76 0.083253 3.58 × 1 0 7
x 2 22 88 0.19628 7.83 × 1 0 7 86 257 0.1568 9.28 × 1 0 7 25 99 0.09272 5.02 × 1 0 6 19 76 0.086515 3.44 × 1 0 7
x 3 22 88 0.1282 6.90 × 1 0 7 86 257 0.19494 8.17 × 1 0 7 24 95 0.1227 8.82 × 1 0 6 18 72 0.1021 9.14 × 1 0 7
x 4 22 88 0.065248 4.72 × 1 0 7 84 251 0.30807 8.74 × 1 0 7 24 95 0.095787 6.04 × 1 0 6 18 72 0.14778 6.26 × 1 0 7
x 5 21 84 0.069903 9.47 × 1 0 7 83 248 0.17631 8.77 × 1 0 7 23 91 0.14077 9.67 × 1 0 6 18 72 0.13795 5.02 × 1 0 7
x 6 21 84 0.11307 5.58 × 1 0 7 81 242 0.24897 8.08 × 1 0 7 23 91 0.11954 5.70 × 1 0 6 17 68 0.075734 8.83 × 1 0 7
x 7 22 88 0.065136 6.97 × 1 0 7 86 257 0.15006 8.24 × 1 0 7 24 95 0.13915 8.88 × 1 0 6 18 72 0.13512 9.24 × 1 0 7
10,000 x 1 23 92 0.1221 4.61 × 1 0 7 88 263 0.51788 8.73 × 1 0 7 25 99 0.1485 7.38 × 1 0 6 21 84 0.25873 4.00 × 1 0 7
x 2 23 92 0.14337 4.43 × 1 0 7 88 263 0.32563 8.40 × 1 0 7 25 99 0.14871 7.09 × 1 0 6 21 84 0.20225 3.85 × 1 0 7
x 3 22 88 0.11801 9.76 × 1 0 7 87 260 0.49233 9.25 × 1 0 7 25 99 0.13531 6.25 × 1 0 6 20 80 0.17464 5.83 × 1 0 7
x 4 22 88 0.12562 6.68 × 1 0 7 85 254 0.39993 9.89 × 1 0 7 24 95 0.10431 8.54 × 1 0 6 18 72 0.13991 8.85 × 1 0 7
x 5 22 88 0.11859 5.36 × 1 0 7 84 251 0.349 9.92 × 1 0 7 24 95 0.17301 6.85 × 1 0 6 18 72 0.29666 7.10 × 1 0 7
x 6 21 84 0.19772 7.90 × 1 0 7 82 245 0.52865 9.14 × 1 0 7 23 91 0.10609 8.06 × 1 0 6 18 72 0.22131 4.19 × 1 0 7
x 7 22 88 0.10854 9.82 × 1 0 7 87 260 0.28236 9.32 × 1 0 7 25 99 0.16564 6.30 × 1 0 6 20 80 0.19497 5.88 × 1 0 7
50,000 x 1 24 96 0.50781 4.12 × 1 0 7 91 272 1.2718 1.00 × 1 0 6 26 103 0.51603 8.26 × 1 0 6 24 96 1.0495 7.08 × 1 0 7
x 2 23 92 0.55031 9.91 × 1 0 7 91 272 1.3593 9.61 × 1 0 7 26 103 0.66303 7.95 × 1 0 6 24 96 1.7579 6.81 × 1 0 7
x 3 23 92 0.42608 8.73 × 1 0 7 91 272 1.4765 8.47 × 1 0 7 26 103 0.61558 7.00 × 1 0 6 23 92 1.1874 7.26 × 1 0 7
x 4 23 92 0.6008 5.97 × 1 0 7 89 266 1.6649 9.06 × 1 0 7 25 99 0.51195 9.56 × 1 0 6 21 84 0.72374 5.18 × 1 0 7
x 5 23 92 0.48214 4.79 × 1 0 7 88 263 1.5237 9.08 × 1 0 7 25 99 0.68016 7.67 × 1 0 6 21 84 0.78 4.16 × 1 0 7
x 6 22 88 0.52379 7.06 × 1 0 7 86 257 1.2773 8.37 × 1 0 7 24 95 0.39061 9.03 × 1 0 6 18 72 0.72028 9.36 × 1 0 7
x 7 23 92 0.45929 8.80 × 1 0 7 91 272 1.3703 8.54 × 1 0 7 26 103 0.38084 7.06 × 1 0 6 23 92 1.1136 7.32 × 1 0 7
100,000 x 1 24 96 1.4971 5.83 × 1 0 7 93 278 3.9137 9.05 × 1 0 7 27 107 0.96586 5.86 × 1 0 6 29 116 2.962 5.93 × 1 0 7
x 2 24 96 1.1772 5.60 × 1 0 7 93 278 4.5167 8.70 × 1 0 7 27 107 1.5967 5.63 × 1 0 6 28 112 2.7883 6.09 × 1 0 7
x 3 24 96 1.6448 4.94 × 1 0 7 92 275 3.3405 9.58 × 1 0 7 26 103 1.9817 9.90 × 1 0 6 26 104 2.3147 6.39 × 1 0 7
x 4 23 92 1.1469 8.45 × 1 0 7 91 272 2.7922 8.20 × 1 0 7 26 103 2.7285 6.78 × 1 0 6 23 92 1.9814 7.03 × 1 0 7
x 5 23 92 1.1788 6.78 × 1 0 7 90 269 2.8264 8.22 × 1 0 7 26 103 1.8455 5.44 × 1 0 6 22 88 2.0523 3.66 × 1 0 7
x 6 22 88 1.093 9.99 × 1 0 7 87 260 3.4285 9.47 × 1 0 7 25 99 1.9817 6.40 × 1 0 6 20 80 1.7103 5.97 × 1 0 7
x 7 24 96 1.0764 4.97 × 1 0 7 92 275 3.8977 9.66 × 1 0 7 26 103 2.1952 9.98 × 1 0 6 26 104 2.3932 6.44 × 1 0 7
Table A7

Results of the four algorithms on Problem 7

DLPA CGD PCG PDY
DIM INP NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM NITER NFE CPU NM
1,000 x 1 17 68 0.083981 4.49 × 1 0 7 37 110 0.019682 9.48 × 1 0 7 17 67 0.016872 6.98 × 1 0 6 17 68 0.026653 6.92 × 1 0 7
x 2 16 64 0.012269 9.72 × 1 0 7 37 110 0.027382 6.87 × 1 0 7 15 59 0.019264 9.89 × 1 0 6 17 68 0.016944 4.34 × 1 0 7
x 3 13 52 0.058015 6.17 × 1 0 7 30 89 0.027834 6.51 × 1 0 7 16 63 0.016796 5.79 × 1 0 6 5 20 0.010062 4.50 × 1 0 8
x 4 18 72 0.015398 4.23 × 1 0 7 38 113 0.016308 8.05 × 1 0 7 16 63 0.017904 5.21 × 1 0 6 18 72 0.026704 8.82 × 1 0 7
x 5 16 64 0.011569 8.59 × 1 0 7 38 113 0.014493 8.05 × 1 0 7 19 75 0.025318 4.95 × 1 0 6 19 76 0.028901 8.09 × 1 0 7
x 6 18 71 0.009617 7.34 × 1 0 7 37 109 0.028759 9.07 × 1 0 7 18 70 0.016732 8.93 × 1 0 6 18 71 0.063023 5.23 × 1 0 7
x 7 13 52 0.019738 8.86 × 1 0 7 37 110 0.091408 6.43 × 1 0 7 20 79 0.022927 8.43 × 1 0 6 19 76 0.020215 4.06 × 1 0 7
5,000 x 1 18 72 0.052581 3.27 × 1 0 7 39 116 0.095833 8.30 × 1 0 7 18 71 0.03167 7.60 × 1 0 6 18 72 0.10268 5.59 × 1 0 7
x 2 17 68 0.070173 7.09 × 1 0 7 38 113 0.054115 9.60 × 1 0 7 17 67 0.039779 5.25 × 1 0 6 17 68 0.074498 9.70 × 1 0 7
x 3 14 56 0.060778 4.50 × 1 0 7 31 92 0.051308 9.10 × 1 0 7 17 67 0.068334 6.31 × 1 0 6 5 20 0.020211 1.01 × 1 0 7
x 4 18 72 0.091829 9.47 × 1 0 7 40 119 0.23451 7.05 × 1 0 7 17 67 0.048196 5.68 × 1 0 6 19 76 0.099238 7.14 × 1 0 7
x 5 17 68 0.045632 6.26 × 1 0 7 40 119 0.067906 7.05 × 1 0 7 20 79 0.03719 5.39 × 1 0 6 20 80 0.092164 6.56 × 1 0 7
x 6 19 75 0.11633 5.35 × 1 0 7 39 115 0.19904 7.94 × 1 0 7 19 74 0.053223 9.73 × 1 0 6 19 75 0.075687 4.22 × 1 0 7
x 7 14 56 0.048395 6.43 × 1 0 7 38 113 0.071364 9.08 × 1 0 7 21 83 0.15667 9.44 × 1 0 6 19 76 0.063314 9.28 × 1 0 7
10,000 x 1 18 72 0.17129 4.63 × 1 0 7 40 119 0.093023 7.34 × 1 0 7 19 75 0.070437 5.23 × 1 0 6 18 72 0.13282 7.90 × 1 0 7
x 2 18 72 0.060602 3.27 × 1 0 7 39 116 0.16625 8.50 × 1 0 7 17 67 0.097411 7.42 × 1 0 6 18 72 0.12721 4.95 × 1 0 7
x 3 14 56 0.049135 6.37 × 1 0 7 32 95 0.075743 8.05 × 1 0 7 17 67 0.082943 8.92 × 1 0 6 5 20 0.039498 1.42 × 1 0 7
x 4 19 76 0.059539 4.36 × 1 0 7 40 119 0.097377 9.96 × 1 0 7 17 67 0.060019 8.03 × 1 0 6 20 80 0.70627 3.66 × 1 0 7
x 5 17 68 0.084777 8.85 × 1 0 7 40 119 0.28044 9.96 × 1 0 7 20 79 0.082028 7.62 × 1 0 6 20 80 0.12466 9.28 × 1 0 7
x 6 19 75 0.058088 7.57 × 1 0 7 40 118 0.094613 7.02 × 1 0 7 20 78 0.079499 6.70 × 1 0 6 21 84 0.13558 4.36 × 1 0 7
x 7 14 56 0.052278 9.04 × 1 0 7 39 116 0.13431 7.95 × 1 0 7 22 87 0.085235 6.48 × 1 0 6 20 80 0.12505 4.67 × 1 0 7
50,000 x 1 19 76 0.25617 3.37 × 1 0 7 42 125 0.58223 6.42 × 1 0 7 20 79 0.21498 5.70 × 1 0 6 19 76 0.4006 6.42 × 1 0 7
x 2 18 72 0.27017 7.30 × 1 0 7 41 122 0.74287 7.43 × 1 0 7 18 71 0.27401 8.08 × 1 0 6 19 76 0.65686 4.02 × 1 0 7
x 3 15 60 0.28348 4.64 × 1 0 7 34 101 0.4446 7.04 × 1 0 7 18 71 0.20877 9.71 × 1 0 6 5 20 0.15127 3.18 × 1 0 7
x 4 19 76 0.26161 9.76 × 1 0 7 42 125 0.49001 8.72 × 1 0 7 18 71 0.23176 8.75 × 1 0 6 21 84 0.74231 8.23 × 1 0 7
x 5 18 72 0.33298 6.45 × 1 0 7 42 125 0.57162 8.72 × 1 0 7 21 83 0.26653 8.30 × 1 0 6 21 84 0.49394 7.14 × 1 0 7
x 6 20 79 0.24655 5.52 × 1 0 7 41 121 0.42754 9.82 × 1 0 7 21 82 0.28457 7.30 × 1 0 6 21 84 0.51964 9.75 × 1 0 7
x 7 15 60 0.23774 6.60 × 1 0 7 41 122 1.2126 6.99 × 1 0 7 23 91 0.38975 7.13 × 1 0 6 21 84 0.49803 3.81 × 1 0 7
100,000 x 1 19 76 0.58336 4.77 × 1 0 7 42 125 1.6004 9.08 × 1 0 7 20 79 0.70463 8.06 × 1 0 6 20 80 0.93073 7.45 × 1 0 7
x 2 19 76 0.64199 3.37 × 1 0 7 42 125 1.4738 6.58 × 1 0 7 19 75 0.52768 5.57 × 1 0 6 19 76 1.0061 5.69 × 1 0 7
x 2 15 60 0.39765 6.56 × 1 0 7 34 101 1.0253 9.96 × 1 0 7 19 75 0.70047 6.69 × 1 0 6 5 20 0.25727 4.50 × 1 0 7
x 4 20 80 0.49832 4.50 × 1 0 7 43 128 1.2432 7.71 × 1 0 7 19 75 0.55612 6.03 × 1 0 6 22 88 1.5836 4.22 × 1 0 7
x 5 18 72 0.85516 9.12 × 1 0 7 43 128 0.88451 7.71 × 1 0 7 22 87 0.44513 5.72 × 1 0 6 22 88 1.2106 7.50 × 1 0 7
x 6 20 79 0.45904 7.80 × 1 0 7 42 124 0.88709 8.69 × 1 0 7 22 86 1.2556 5.03 × 1 0 6 22 88 1.1085 5.00 × 1 0 7
x 7 15 60 0.44029 9.35 × 1 0 7 41 122 1.4869 9.91 × 1 0 7 24 95 1.2277 4.90 × 1 0 6 20 80 1.2 6.65 × 1 0 7
Table A8

Results of the four algorithms on Problem 8

DIM | INP | DLPA: NITER NFE CPU NM | CGD: NITER NFE CPU NM | PCG: NITER NFE CPU NM | PDY: NITER NFE CPU NM
1,000 | x1 | 24 96 0.121 8.10e-07 | 122 365 0.11869 9.16e-07 | 23 91 0.13588 7.76e-06 | 36 144 0.22366 6.34e-07
1,000 | x2 | 24 96 0.089346 9.24e-07 | 133 398 0.56961 9.00e-07 | 16 63 0.026989 8.13e-06 | 35 140 0.19954 9.13e-07
1,000 | x3 | 24 96 0.068816 8.67e-07 | 131 392 0.47059 8.98e-07 | 95 379 0.20295 9.34e-06 | 35 140 0.23781 7.34e-07
1,000 | x4 | 18 72 0.10055 5.28e-07 | 121 362 0.34358 9.71e-07 | 15 59 0.048978 3.76e-06 | 33 132 0.17271 2.30e-07
1,000 | x5 | 32 127 0.18279 9.38e-07 | 129 386 0.17558 9.64e-07 | 40 159 0.061018 9.13e-06 | 31 124 0.19467 8.06e-07
1,000 | x6 | 15 60 0.050172 6.41e-07 | 135 404 0.26548 9.28e-07 | 84 335 0.15475 8.89e-06 | 24 96 0.13059 9.72e-07
1,000 | x7 | 19 76 0.082546 3.69e-07 | 132 395 0.19928 9.09e-07 | 18 71 0.030581 4.30e-06 | 27 108 0.16629 8.73e-07
5,000 | x1 | 15 60 0.2735 6.41e-07 | 121 362 0.86438 9.10e-07 | 20 79 0.28292 8.49e-06 | 34 136 1.1511 8.36e-07
5,000 | x2 | 15 60 0.29489 6.43e-07 | 131 392 0.65074 9.95e-07 | 17 67 0.21356 7.70e-06 | 34 136 1.0712 7.93e-07
5,000 | x3 | 22 88 0.33952 8.73e-07 | 129 386 0.61924 9.76e-07 | 91 363 0.68882 9.74e-06 | 34 136 0.83252 6.18e-07
5,000 | x4 | 15 60 0.335 8.19e-07 | 120 359 0.69048 9.47e-07 | 16 63 0.1174 3.34e-06 | 31 124 0.735 3.90e-07
5,000 | x5 | 27 107 0.51327 6.53e-07 | 128 383 0.88549 9.39e-07 | 42 167 0.38438 8.96e-06 | 30 120 0.97382 8.11e-07
5,000 | x6 | 14 56 0.20985 5.35e-07 | 134 401 0.87261 9.02e-07 | 83 331 0.56573 8.81e-06 | 24 96 1.4208 7.51e-07
5,000 | x7 | 19 76 0.28623 8.52e-07 | 126 377 0.62182 9.94e-07 | 18 71 0.20442 6.02e-06 | 28 112 1.1575 3.64e-07
10,000 | x1 | 15 60 0.80511 8.43e-07 | 120 359 1.6795 9.61e-07 | 20 79 0.30454 6.25e-06 | 34 136 1.9231 6.78e-07
10,000 | x2 | 15 60 0.43963 5.86e-07 | 131 392 1.3938 9.40e-07 | 18 71 0.22616 3.80e-06 | 34 136 1.7019 6.42e-07
10,000 | x3 | 17 68 0.44729 5.54e-07 | 129 386 1.33 9.21e-07 | 90 359 1.4221 9.37e-06 | 33 132 1.7108 7.57e-07
10,000 | x4 | 16 64 0.44231 6.80e-07 | 119 356 1.279 9.97e-07 | 16 63 0.22523 5.07e-06 | 30 120 1.7966 3.94e-07
10,000 | x5 | 50 199 1.6118 7.61e-07 | 127 380 1.6064 9.89e-07 | 41 163 0.67387 9.21e-06 | 30 120 1.4776 5.57e-07
10,000 | x6 | 14 56 0.37115 5.74e-07 | 133 398 1.4382 9.49e-07 | 82 327 1.0622 9.52e-06 | 24 96 1.2013 7.21e-07
10,000 | x7 | 20 80 0.51137 3.80e-07 | 131 392 1.2211 9.11e-07 | 18 71 0.20591 9.14e-06 | 27 108 1.3388 4.04e-07
50,000 | x1 | 16 64 1.9575 6.26e-07 | 119 356 5.181 9.39e-07 | 19 75 1.0701 9.22e-06 | 34 136 8.0529 6.35e-07
50,000 | x2 | 15 60 1.8031 5.00e-07 | 130 389 5.2424 9.16e-07 | 18 71 0.92029 9.72e-06 | 33 132 7.0637 6.12e-07
50,000 | x3 | 18 72 2.1992 7.62e-07 | 128 383 4.7286 8.97e-07 | 90 359 4.8887 9.77e-06 | 32 128 6.4247 7.22e-07
50,000 | x4 | 17 68 2.1051 3.66e-07 | 118 353 4.5822 9.73e-07 | 17 67 1.0519 4.14e-06 | 24 96 5.1476 3.36e-07
50,000 | x5 | 5 18 0.55214 NaN | 126 377 4.8601 9.64e-07 | 36 143 1.7989 9.56e-06 | 29 116 6.3711 5.83e-07
50,000 | x6 | 13 52 1.6167 9.88e-07 | 132 395 4.8457 9.25e-07 | 81 323 4.1491 9.45e-06 | 31 124 7.1893 7.91e-07
50,000 | x7 | 20 80 2.5774 8.82e-07 | 117 350 4.5869 9.21e-07 | 19 75 1.2553 6.26e-06 | 27 108 7.1677 3.63e-07
100,000 | x1 | 15 60 3.6604 8.94e-07 | 118 353 12.728 9.89e-07 | 20 79 2.6498 4.35e-06 | 33 132 16.6928 8.00e-07
100,000 | x2 | 15 60 3.6973 4.35e-07 | 129 386 10.4123 9.65e-07 | 19 75 2.1614 4.75e-06 | 33 132 14.4679 7.49e-07
100,000 | x3 | 16 64 3.9466 9.81e-07 | 127 380 10.2791 9.45e-07 | 88 351 9.9967 9.52e-06 | 40 160 17.8282 9.75e-07
100,000 | x4 | 16 64 4.2131 5.80e-07 | 118 353 14.9932 9.18e-07 | 17 67 1.9231 5.98e-06 | 30 120 14.5748 9.85e-07
100,000 | x5 | 6 22 2.2633 NaN | 126 377 18.8192 9.09e-07 | 34 135 4.0087 8.54e-06 | 28 112 12.3016 9.46e-07
100,000 | x6 | 14 56 3.9274 7.89e-07 | 131 392 15.0586 9.74e-07 | 81 323 8.2856 8.86e-06 | 26 104 11.8251 9.05e-07
100,000 | x7 | 21 84 5.8799 4.18e-07 | 128 383 11.2583 9.22e-07 | 19 75 2.7705 8.98e-06 | 29 116 16.416 3.15e-07
Table A9

Results of the four algorithms on Problem 9

DIM | INP | DLPA: NITER NFE CPU NM | CGD: NITER NFE CPU NM | PCG: NITER NFE CPU NM | PDY: NITER NFE CPU NM
1,000 | x1 | 12 42 0.056128 7.33e-07 | 39 113 0.017721 9.55e-07 | 9 31 0.00876 7.60e-06 | 11 42 0.010225 2.67e-07
1,000 | x2 | 12 42 0.005951 7.33e-07 | 40 116 0.018402 9.95e-07 | 9 31 0.009756 7.60e-06 | 11 42 0.018966 2.67e-07
1,000 | x3 | 12 42 0.005343 7.33e-07 | 39 113 0.012425 8.32e-07 | 9 31 0.009129 7.60e-06 | 11 42 0.016048 2.67e-07
1,000 | x4 | 12 42 0.006353 7.33e-07 | 39 113 0.012779 9.61e-07 | 9 31 0.00785 7.60e-06 | 11 42 0.014648 2.67e-07
1,000 | x5 | 12 42 0.006258 7.33e-07 | 41 119 0.013855 8.35e-07 | 9 31 0.004864 7.60e-06 | 11 42 0.021 2.67e-07
1,000 | x6 | 12 42 0.007773 7.33e-07 | 40 116 0.024656 8.96e-07 | 9 31 0.005108 7.60e-06 | 12 46 0.01818 2.67e-07
1,000 | x7 | 12 42 0.005644 7.33e-07 | – – – – | 9 31 0.004923 7.60e-06 | 11 42 0.032639 2.67e-07
5,000 | x1 | 7 25 0.012808 6.36e-07 | 26 75 0.030602 8.55e-07 | 7 25 0.016685 1.30e-06 | 8 31 0.03543 1.59e-07
5,000 | x2 | 7 25 0.01358 6.36e-07 | 26 75 0.045122 9.12e-07 | 7 25 0.046745 1.30e-06 | 8 31 0.056933 1.59e-07
5,000 | x3 | 7 25 0.017767 6.36e-07 | 27 78 0.052963 6.80e-07 | 7 25 0.01804 1.30e-06 | 8 31 0.026644 1.59e-07
5,000 | x4 | 7 25 0.018029 6.36e-07 | 26 75 0.065225 9.37e-07 | 7 25 0.017476 1.30e-06 | 9 35 0.075909 1.59e-07
5,000 | x5 | 7 25 0.011563 6.36e-07 | 25 72 0.033503 7.58e-07 | 7 25 0.015564 1.30e-06 | 9 35 0.18498 1.59e-07
5,000 | x6 | 7 25 0.014182 6.36e-07 | 27 78 0.033909 9.60e-07 | 7 25 0.016409 1.30e-06 | 9 35 0.0478 1.59e-07
5,000 | x7 | 7 25 0.05434 6.36e-07 | – – – – | 7 25 0.010877 1.30e-06 | 8 31 0.028334 1.59e-07
10,000 | x1 | 10 38 0.060643 3.25e-07 | 18 51 0.046853 6.30e-07 | 5 17 0.098067 5.06e-06 | 11 43 0.12467 7.22e-07
10,000 | x2 | 10 38 0.038555 3.25e-07 | 16 45 0.047294 9.85e-07 | 5 17 0.019929 5.06e-06 | 11 43 0.099162 7.22e-07
10,000 | x3 | 10 38 0.07138 3.25e-07 | 18 51 0.053849 9.88e-07 | 5 17 0.028445 5.06e-06 | 11 43 0.093027 7.22e-07
10,000 | x4 | 10 38 0.039484 3.25e-07 | 19 54 0.051015 8.23e-07 | 5 17 0.026973 5.06e-06 | 12 47 0.11308 7.22e-07
10,000 | x5 | 10 38 0.044483 3.25e-07 | 15 42 0.038984 8.78e-07 | 5 17 0.01959 5.06e-06 | 13 51 0.16591 7.22e-07
10,000 | x6 | 10 38 0.040898 3.25e-07 | 19 54 0.060987 9.43e-07 | 5 17 0.019229 5.06e-06 | 13 51 0.1536 7.22e-07
10,000 | x7 | 10 38 0.062344 3.25e-07 | 35 102 0.3037 9.01e-07 | 5 17 0.017972 5.06e-06 | 11 43 0.12267 7.22e-07
50,000 | x1 | 7 27 0.11482 1.48e-07 | 9 25 0.080642 8.47e-07 | 8 30 0.114 5.15e-06 | 10 40 0.62486 7.59e-07
50,000 | x2 | 7 27 0.1128 1.48e-07 | 9 25 0.12277 6.99e-07 | 8 30 0.12507 5.15e-06 | 10 40 0.41968 7.59e-07
50,000 | x3 | 7 27 0.23076 1.48e-07 | 9 25 0.076181 4.40e-07 | 8 30 0.091014 5.15e-06 | 11 44 0.87615 7.59e-07
50,000 | x4 | 7 27 0.11882 1.48e-07 | 9 25 0.079855 6.19e-07 | 8 30 0.096467 5.15e-06 | 13 52 0.88891 7.59e-07
50,000 | x5 | 7 27 0.11497 1.48e-07 | 9 25 0.077687 6.85e-07 | 8 30 0.10931 5.15e-06 | 14 56 1.0407 7.59e-07
50,000 | x6 | 7 27 0.17967 1.48e-07 | 9 25 0.083099 6.09e-07 | 8 30 0.12497 5.15e-06 | 16 64 1.6327 7.59e-07
50,000 | x7 | 7 27 0.11331 1.48e-07 | 115 343 1.2562 9.80e-07 | 8 30 0.096746 5.15e-06 | 11 44 0.80202 7.59e-07
100,000 | x1 | 9 35 0.51704 8.64e-07 | 69 205 1.4373 9.10e-07 | 6 22 0.15218 6.81e-07 | 9 36 1.186 2.19e-07
100,000 | x2 | 9 35 0.47733 8.64e-07 | 69 205 1.4591 9.06e-07 | 6 22 0.16073 6.81e-07 | 9 36 1.0253 2.19e-07
100,000 | x3 | 9 35 0.35461 8.64e-07 | 67 199 1.3156 9.59e-07 | 6 22 0.20701 6.81e-07 | 11 44 1.4569 2.19e-07
100,000 | x4 | 9 35 0.35647 8.64e-07 | 71 211 1.4115 9.67e-07 | 6 22 0.14843 6.81e-07 | 14 56 2.1625 2.19e-07
100,000 | x5 | 9 35 0.46511 8.64e-07 | 65 193 1.3448 9.73e-07 | 6 22 0.14879 6.81e-07 | 16 64 3.2902 2.19e-07
100,000 | x6 | 9 35 0.36995 8.64e-07 | 41 121 0.86627 9.14e-07 | 6 22 0.19292 6.81e-07 | 18 72 4.4108 2.19e-07
100,000 | x7 | 9 35 0.62203 8.64e-07 | 114 340 2.054 9.77e-07 | 6 22 0.22237 6.81e-07 | 11 44 1.6644 2.19e-07

References

[1] A. N. Iusem and M. V. Solodov, Newton-type methods with generalized distances for constrained optimization, Optimization 41 (1997), no. 3, 257–278, doi:10.1080/02331939708844339.

[2] A. H. Ibrahim, P. Kumam, and W. Kumam, A family of derivative-free conjugate gradient methods for constrained nonlinear equations and image restoration, IEEE Access 8 (2020), 162714–162729, doi:10.1109/ACCESS.2020.3020969.

[3] B. Ghaddar, J. Marecek, and M. Mevissen, Optimal power flow as a polynomial optimization problem, IEEE Trans. Power Syst. 31 (2016), no. 1, 539–546, doi:10.1109/TPWRS.2015.2390037.

[4] Z. Dai and J. Kang, Some new efficient mean-variance portfolio selection models, Int. J. Finance Econ. 27 (2022), no. 4, 4784–4796, doi:10.1002/ijfe.2400.

[5] Z. Dai, X. Dong, J. Kang, and L. Hong, Forecasting stock market returns: New technical indicators and two-step economic constraint method, North Am. J. Econ. Finance 53 (2020), 101216, doi:10.1016/j.najef.2020.101216.

[6] W. Sun and Y. X. Yuan, Optimization Theory and Methods: Nonlinear Programming, vol. 1, Springer, New York, NY, 2006.

[7] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, New York, NY, 2006.

[8] W. La Cruz, J. M. Martínez, and M. Raydan, Spectral residual method without gradient information for solving large-scale nonlinear systems of equations, Math. Comput. 75 (2006), no. 255, 1429–1448, doi:10.1090/S0025-5718-06-01840-0.

[9] M. Y. Waziri, W. J. Leong, M. A. Hassan, and M. Monsi, Jacobian computation-free Newton's method for systems of non-linear equations, J. Numer. Math. Stoch. 2 (2010), no. 1, 54–63.

[10] H. Mohammad and M. Y. Waziri, On Broyden-like update via some quadratures for solving nonlinear systems of equations, Turkish J. Math. 39 (2015), no. 3, 335–345, doi:10.3906/mat-1404-41.

[11] A. H. Ibrahim, P. Kumam, A. Kamandi, and A. B. Abubakar, An efficient hybrid conjugate gradient method for unconstrained optimization, Optim. Methods Softw. (2022), 1–14, doi:10.1080/10556788.2021.1998490.

[12] W. W. Hager and H. Zhang, A survey of nonlinear conjugate gradient methods, Pacific J. Optim. 2 (2006), no. 1, 35–58.

[13] G. Yuan, X. Wang, and Z. Sheng, Family weak conjugate gradient algorithms and their convergence analysis for nonconvex functions, Numer. Algorithms 84 (2020), no. 3, 935–956, doi:10.1007/s11075-019-00787-7.

[14] G. Yuan, J. Lu, and Z. Wang, The PRP conjugate gradient algorithm with a modified WWP line search and its application in the image restoration problems, Appl. Numer. Math. 152 (2020), 1–11, doi:10.1016/j.apnum.2020.01.019.

[15] G. Yuan, Z. Wei, and Y. Yang, The global convergence of the Polak-Ribière-Polyak conjugate gradient algorithm under inexact line search for nonconvex functions, J. Comput. Appl. Math. 362 (2019), 262–275, doi:10.1016/j.cam.2018.10.057.

[16] G. Yuan, X. Wang, and Z. Sheng, The projection technique for two open problems of unconstrained optimization problems, J. Optim. Theory Appl. 186 (2020), no. 2, 590–619, doi:10.1007/s10957-020-01710-0.

[17] A. B. Abubakar, M. Malik, P. Kumam, H. Mohammad, M. Sun, A. H. Ibrahim, and A. I. Kiri, A Liu-Storey-type conjugate gradient method for unconstrained minimization problem with application in motion control, J. King Saud Univ. Sci. 34 (2022), no. 4, 101923, doi:10.1016/j.jksus.2022.101923.

[18] A. B. Abubakar, P. Kumam, M. Malik, and A. H. Ibrahim, A hybrid conjugate gradient based approach for solving unconstrained optimization and motion control problems, Math. Comput. Simulation 201 (2021), 640–657, doi:10.1016/j.matcom.2021.05.038.

[19] H. Mohammad, A diagonal PRP-type projection method for convex constrained nonlinear monotone equations, J. Ind. Manag. Optim. 17 (2021), no. 1, 101–116, doi:10.3934/jimo.2019101.

[20] C. Wang, Y. Wang, and C. Xu, A projection method for a system of nonlinear monotone equations with convex constraints, Math. Methods Oper. Res. 66 (2007), no. 1, 33–46, doi:10.1007/s00186-006-0140-y.

[21] M. V. Solodov and B. F. Svaiter, A globally convergent inexact Newton method for systems of monotone equations, in: Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods, Springer, Boston, MA, 1998, pp. 355–369, doi:10.1007/978-1-4757-6388-1_18.

[22] F. Ma and C. Wang, Modified projection method for solving a system of monotone equations with convex constraints, J. Appl. Math. Comput. 34 (2010), no. 1, 47–56, doi:10.1007/s12190-009-0305-y.

[23] L. Zhang and W. Zhou, Spectral gradient projection method for solving nonlinear monotone equations, J. Comput. Appl. Math. 196 (2006), no. 2, 478–484, doi:10.1016/j.cam.2005.10.002.

[24] Z. Yu, J. Lin, J. Sun, Y. H. Xiao, L. Liu, and Z. H. Li, Spectral gradient projection method for monotone nonlinear equations with convex constraints, Appl. Numer. Math. 59 (2009), no. 10, 2416–2423, doi:10.1016/j.apnum.2009.04.004.

[25] Z. Dai and H. Zhu, A modified Hestenes-Stiefel-type derivative-free method for large-scale nonlinear monotone equations, Mathematics 8 (2020), no. 2, 168, doi:10.3390/math8020168.

[26] A. H. Ibrahim, P. Kumam, A. B. Abubakar, and J. Abubakar, A method with inertial extrapolation step for convex constrained monotone equations, J. Inequal. Appl. 2021 (2021), no. 1, 1–25, doi:10.1186/s13660-021-02719-3.

[27] A. H. Ibrahim, M. Kimiaei, and P. Kumam, A new black box method for monotone nonlinear equations, Optimization (2021), 1–19, doi:10.1080/02331934.2021.2002326.

[28] A. H. Ibrahim, J. Deepho, A. B. Abubakar, and A. Adamu, A three-term Polak-Ribière-Polyak derivative-free method and its application to image restoration, Sci. Afr. 13 (2021), e00880, doi:10.1016/j.sciaf.2021.e00880.

[29] A. B. Abubakar, P. Kumam, A. H. Ibrahim, P. Chaipunya, and S. A. Rano, New hybrid three-term spectral-conjugate gradient method for finding solutions of nonlinear monotone operator equations with applications, Math. Comput. Simulation 201 (2022), 670–683, doi:10.1016/j.matcom.2021.07.005.

[30] A. H. Ibrahim, P. Kumam, B. A. Hassan, A. B. Abubakar, and J. Abubakar, A derivative-free three-term Hestenes-Stiefel type method for constrained nonlinear equations and image restoration, Int. J. Comput. Math. 99 (2022), no. 5, 1041–1065, doi:10.1080/00207160.2021.1946043.

[31] A. B. Abubakar, P. Kumam, and A. H. Ibrahim, Inertial derivative-free projection method for nonlinear monotone operator equations with convex constraints, IEEE Access 9 (2021), 92157–92167, doi:10.1109/ACCESS.2021.3091906.

[32] A. H. Ibrahim, P. Kumam, A. B. Abubakar, and A. Adamu, Accelerated derivative-free method for nonlinear monotone equations with an application, Numer. Linear Algebra Appl. 29 (2022), e2424, doi:10.1002/nla.2424.

[33] A. H. Ibrahim and P. Kumam, Re-modified derivative-free iterative method for nonlinear monotone equations with convex constraints, Ain Shams Eng. J. 12 (2021), no. 2, 2205–2210, doi:10.1016/j.asej.2020.11.009.

[34] Y. Zheng and B. Zheng, Two new Dai-Liao-type conjugate gradient methods for unconstrained optimization problems, J. Optim. Theory Appl. 175 (2017), no. 2, 502–509, doi:10.1007/s10957-017-1140-1.

[35] P. S. Stanimirović, B. Ivanov, S. Djordjević, and I. Brajević, New hybrid conjugate gradient and Broyden-Fletcher-Goldfarb-Shanno conjugate gradient methods, J. Optim. Theory Appl. 178 (2018), no. 3, 860–884, doi:10.1007/s10957-018-1324-3.

[36] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems, IEEE J. Sel. Topics Signal Process. 1 (2007), no. 4, 586–597, doi:10.1109/JSTSP.2007.910281.

[37] Y. Xiao and H. Zhu, A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing, J. Math. Anal. Appl. 405 (2013), no. 1, 310–319, doi:10.1016/j.jmaa.2013.04.017.

[38] J. Liu and S. Li, A projection method for convex constrained monotone nonlinear equations with applications, Comput. Math. Appl. 70 (2015), no. 10, 2442–2453, doi:10.1016/j.camwa.2015.09.014.

[39] J. Liu and Y. Feng, A derivative-free iterative method for nonlinear monotone equations with convex constraints, Numer. Algorithms 82 (2019), no. 1, 245–262, doi:10.1007/s11075-018-0603-2.

[40] E. D. Dolan and J. J. Moré, Benchmarking optimization software with performance profiles, Math. Program. 91 (2002), no. 2, 201–213, doi:10.1007/s101070100263.

[41] D. L. Donoho, For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution, Commun. Pure Appl. Math. 59 (2006), no. 6, 797–829, doi:10.1002/cpa.20132.

[42] D. L. Donoho, Compressed sensing, IEEE Trans. Inform. Theory 52 (2006), no. 4, 1289–1306, doi:10.1109/TIT.2006.871582.

[43] E. Candes and J. Romberg, Sparsity and incoherence in compressive sampling, Inverse Problems 23 (2007), no. 3, 969, doi:10.1088/0266-5611/23/3/008.

[44] I. Daubechies, M. Defrise, and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math. 57 (2004), no. 11, 1413–1457, doi:10.1002/cpa.20042.

[45] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci. 2 (2009), no. 1, 183–202, doi:10.1137/080716542.

[46] Y. Xiao, Q. Wang, and Q. Hu, Non-smooth equations based method for ℓ1-norm problems with applications to compressed sensing, Nonlinear Anal. Theory Methods Appl. 74 (2011), no. 11, 3570–3577, doi:10.1016/j.na.2011.02.040.

[47] A. H. Ibrahim, P. Kumam, A. B. Abubakar, and J. Abubakar, A descent three-term derivative-free method for signal reconstruction in compressive sensing, Carpathian J. Math. 38 (2022), no. 2, 431–443.

[48] A. C. Bovik, Handbook of Image and Video Processing, Academic Press, London, UK, 2010.

[49] S. M. Lajevardi, Structural similarity classifier for facial expression recognition, Signal Image Video Process. 8 (2014), no. 6, 1103–1110, doi:10.1007/s11760-014-0639-2.

[50] Y. Bing and G. Lin, An efficient implementation of Merrill's method for sparse or partially separable systems of nonlinear equations, SIAM J. Optim. 1 (1991), no. 2, 206–221, doi:10.1137/0801015.

[51] Y. Ding, Y. Xiao, and J. Li, A class of conjugate gradient methods for convex constrained monotone equations, Optimization 66 (2017), no. 12, 2309–2328, doi:10.1080/02331934.2017.1372438.

[52] A. H. Ibrahim, P. Kumam, A. B. Abubakar, W. Jirakitpuwapat, and J. Abubakar, A hybrid conjugate gradient algorithm for constrained monotone equations with application in compressive sensing, Heliyon 6 (2020), no. 3, e03466, doi:10.1016/j.heliyon.2020.e03466.

[53] W. La Cruz, A spectral algorithm for large-scale systems of nonlinear monotone equations, Numer. Algorithms 76 (2017), no. 4, 1109–1130, doi:10.1007/s11075-017-0299-8.

[54] G. Yu, S. Niu, and J. Ma, Multivariate spectral gradient projection method for nonlinear monotone equations with convex constraints, J. Ind. Manag. Optim. 9 (2013), no. 1, 117–129, doi:10.3934/jimo.2013.9.117.

Received: 2021-09-27
Revised: 2022-04-13
Accepted: 2022-08-12
Published Online: 2022-12-31

© 2022 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
