Open Access Article

A reduced space branch and bound algorithm for a class of sum of ratios problems

  • Yingfeng Zhao and Ting Zhao
Published/Copyright: 30 May 2018

Abstract

The sum of ratios problem occurs frequently in various areas of engineering practice and management science, but most solution methods for this kind of problem are designed only for determining local solutions. In this paper, we develop a reduced space branch and bound algorithm for globally solving the sum of convex-concave ratios problem. By introducing some auxiliary variables, the initial problem is converted into an equivalent problem in which the objective function is linear. The convex relaxation of the equivalent problem is then established by relaxing the auxiliary variables only in the outcome space. By integrating some acceleration and reduction techniques into the branch and bound scheme, a global optimization algorithm is developed for this kind of problem. Convergence and optimality of the algorithm are established, and numerical examples taken from the recent literature and MINLPLib are carried out to validate the performance of the proposed algorithm.

MSC 2010: 90C26; 90C30

1 Introduction

Fractional programming occurs frequently in a variety of economic, industrial and engineering problems [1]. It is one of the most topical and useful fields of nonconvex optimization, and much intensive and systematic research has been devoted to fractional programming since the seminal works by Charnes and Cooper [2, 3]. The sum of ratios problem (SRP) is a special case of fractional programming [4, 5]; as a generalization of the problem of optimizing a sum of linear ratios, the SRP has a broad range of applications. Included among these are clustering problems [6], transportation planning [7], multi-stage stochastic shipping [8], finance and investment [9], and layered manufacturing problems [10, 11], to name but a few. The reader is referred to the survey [12] and the bibliography [13] for many other applications. In this paper, we focus on the following sum of ratios problem:

$$\mathrm{SRP}:\quad \min\ f(x)=\sum_{i=1}^{p}\delta_i\frac{\phi_i(x)}{\psi_i(x)}\quad \text{s.t.}\quad h_j(x)\le 0,\ j=1,2,\dots,m,\quad x\in X=[\underline{x},\,\overline{x}],$$

where each coefficient $\delta_i$ is a real number and the functions $\phi_i(x)$, $-\psi_i(x)$, $i=1,2,\dots,p$, and $h_j(x)$, $j=1,2,\dots,m$, are all convex (hence continuous). Furthermore, we assume that $\psi_i(x)\ne 0$ for all $x\in X$. By the continuity of the denominators, each $\psi_i(x)$ must then satisfy $\psi_i(x)>0$ or $\psi_i(x)<0$ throughout $X$. Based on the discussion in [14], and for the sake of simplicity, we only need to consider a special case of the SRP, namely that the numerator and the denominator of each ratio in the objective function of the SRP satisfy the following condition:

$$\phi_i(x)\ge 0,\quad \psi_i(x)>0,\quad \forall\, x\in X,\ i=1,2,\dots,p.\tag{1}$$

The SRP has attracted the interest of many researchers and practitioners over the years, at least in part because of the difficulty caused by the existence of multiple local solutions that are not globally optimal. Charnes and Cooper proved that the optimization of a single linear ratio is equivalent to a linear program and hence can be solved in polynomial time [2, 15], but this is not true for the SRP, whose objective function is a sum of p (p ≥ 2) nonlinear (or even linear) ratios; owing to some inherent difficulties, there remain many theoretical and computational challenges in finding a global optimizer of the SRP. During the past several years, some feasible algorithms have been proposed for the SRP and its special forms. For instance, Konno et al. presented a parametric simplex method and an efficient heuristic algorithm for globally solving the sum of linear fractional problems and its special case [17, 18], but their algorithms can only solve sums of linear ratios, and the problem must have three ratios. Falk and Palocsay put forward an approach based on image space analysis for globally solving the sum of affine ratios problem [19]; they identify classes of nonconvex problems, involving either sums or products of ratios of linear terms, which may be treated by analysis in a transformed space. In each class, the image space is defined by a mapping that associates a new variable with each original ratio of linear terms; in the image space, optimization is easy in certain directions, and the overall solution may be realized by sequentially optimizing in these directions. This algorithm has good performance, but the problems it considers can only have linear constraints. Pei and Zhu presented a branch and bound algorithm by converting the problem into a D.C. program [20]; their algorithm performs well when the number of variables is not too big.
In addition, Shen and Wang developed two branch-reduction-bound algorithms for the sum of linear ratios problem [21, 22]. Both of these branch and bound algorithms branch in the variable space, so their performance declines sharply as the number of variables increases. Jiao and Liu presented a practical outcome space branch and bound algorithm for globally maximizing the sum of linear ratios problem [14]; this algorithm can effectively solve sum of linear ratios problems with quite a lot of variables, but the branching operation occurs in the outcome space of the reciprocals of the denominators. Despite these various contributions, there is still no decisive method for globally solving the general sum of ratios problem, and an efficient solution method for the SRP remains an open issue.
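The multiple-local-optima difficulty noted above already appears in one dimension. The following toy sketch (our own illustrative instance, not one from the cited works) sums two monotone linear ratios whose sum is non-monotone, so that minimization over the box has two local minima separated by an interior local maximum:

```python
# Each linear ratio below is monotone on [0, 1], but their sum rises and
# then falls: both endpoints are local minima of the minimization problem.
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
f = x / (x + 0.1) + (1.0 - x) / (2.0 - x)   # sum of two linear ratios

k = int(np.argmax(f))       # index of the interior peak
print(f[0], f[-1], f[k])    # both endpoint values lie below the peak
```

Here f(0) = 0.5 is the global minimum while f(1) ≈ 0.909 is a strictly local one, so a purely local method started near x = 1 stays there.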

In this study, we present a reduced space branch and bound algorithm with practical accelerating techniques based on some properties (concavity, convexity and continuity) of the objective and constraint functions of the SRP. The attractive properties of this algorithm are mainly embodied in the following three aspects. First, the problem we consider is more general and extensive than those treated in most of the literature cited above. Second, the relaxation operation we use is quite concise and practical, and the adapted subdivision and range reduction techniques, carried out in the outcome space, can sharply reduce the number of nodes in the branching tree, so that the execution efficiency of the algorithm is significantly improved. Finally, the global convergence property is proved, and numerical experiments together with a random test are performed to illustrate the feasibility and robustness of the algorithm.

The remainder of this paper is organized as follows. The next section shows how to construct the equivalent problem EP and the convex relaxation programming problem of the EP according to the concavity and convexity of the objective and constraint functions of the SRP. The condensing, branching and bounding operations of the new algorithm are established in Section 3. The detailed statement and the global convergence property of the presented algorithm are put forward in Section 4. Section 5 is devoted to a computational comparison between our algorithm and some other algorithms from the literature. Some concluding remarks are given in the last section.

2 Equivalent problem and relaxation programming

In this section, we first transform the problem SRP into an equivalent problem EP by associating each ratio in the objective function of the SRP with an additional variable, which we call the outcome variable; our focus then shifts to finding a global optimal solution of the EP. By utilizing the special structure of the EP, a concise convex relaxation programming problem for the EP is introduced, with which we only need to branch in a reduced outcome space while a new upper bound and lower bound of the optimal value are obtained simultaneously at each iteration, greatly reducing the computational workload.

2.1 Equivalent problem

To solve the problem, we first transform the SRP into an equivalent problem EP in which the objective function is linear and the constraint functions possess a special structure that is beneficial for constructing convex relaxation programming problems. To explain how such a reformulation is possible, we introduce p auxiliary variables $t_i$, $i=1,2,\dots,p$, and, for definiteness and without loss of generality, we assume that

  1. $\delta_i>0$, $i=1,2,\dots,p$, when $\phi_i(x)$ is convex and $\psi_i(x)$ is concave, $i=1,2,\dots,p$;

  2. $\delta_i>0$, $i=1,2,\dots,T$; $\delta_i<0$, $i=T+1,T+2,\dots,p$, when $\phi_i(x)$ and $\psi_i(x)$ are linear, $i=1,2,\dots,p$.

Then denote

$$d_i=\min_{x\in X}\psi_i(x);\qquad l_i^0=\frac{\min_{x\in X}\phi_i(x)}{\max_{x\in X}\psi_i(x)},\qquad u_i^0=\frac{\max_{x\in X}\phi_i(x)}{\min_{x\in X}\psi_i(x)},\qquad i=1,2,\dots,p,$$

note that the values of $d_i$, $l_i^0$ and $u_i^0$ can be obtained easily by exploiting the convexity and concavity of the numerators and denominators, and clearly $0\le l_i^0\le u_i^0$.
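In the all-linear case each of these quantities comes from a small LP over the box X. A minimal sketch (our own affine parametrization $\phi_i(x)=n^Tx+\beta$, $\psi_i(x)=d^Tx+\gamma$ and our own use of scipy, not code from the paper):

```python
# Compute d_i = min psi_i, l_i^0 = min phi_i / max psi_i and
# u_i^0 = max phi_i / min psi_i for one affine ratio over the box
# [lb, ub], via four bound-constrained LPs.
import numpy as np
from scipy.optimize import linprog

def ratio_bounds(n_vec, beta, d_vec, gamma, lb, ub):
    box = list(zip(lb, ub))
    num_min = linprog(n_vec, bounds=box).fun + beta
    num_max = -linprog(-n_vec, bounds=box).fun + beta
    den_min = linprog(d_vec, bounds=box).fun + gamma
    den_max = -linprog(-d_vec, bounds=box).fun + gamma
    return den_min, num_min / den_max, num_max / den_min  # d_i, l_i^0, u_i^0
```

For instance, $\phi(x)=x_1+2x_2+2$ and $\psi(x)=3x_1-4x_2+5$ on $[0,1]^2$ give $d_i=1$, $l_i^0=0.25$ and $u_i^0=5$.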

Next, we consider the following equivalent problem EP:

$$\mathrm{EP}:\quad \min\ g(t)=\sum_{i=1}^{p}\delta_i t_i\quad\text{s.t.}\quad\begin{array}{l} c_i(t)=\phi_i(x)-\big[t_i(\psi_i(x)-d_i)+t_id_i\big]\le 0,\quad i=1,2,\dots,T,\\[1mm] c_i(t)=\phi_i(x)-\big[t_i(\psi_i(x)-d_i)+t_id_i\big]\ge 0,\quad i=T+1,T+2,\dots,p,\\[1mm] h_j(x)\le 0,\ j=1,2,\dots,m,\qquad x\in X,\qquad t\in D^0,\end{array}$$

where $D^0=\{t\in\mathbb{R}^p\mid l_i^0\le t_i\le u_i^0,\ i=1,2,\dots,p\}$ is called the outcome space corresponding to the feasible region of the SRP. Problems SRP and EP are equivalent in the sense of the following theorem.

Theorem 2.1

$x^*\in\mathbb{R}^n$ is a global optimal solution of the SRP if and only if $(x^*,t^*)\in\mathbb{R}^{n+p}$ is a global optimal solution of the EP, where $t_i^*=\frac{\phi_i(x^*)}{\psi_i(x^*)}$, $i=1,2,\dots,p$.

Proof

Assume $x^*\in\mathbb{R}^n$ is a global optimal solution of the SRP and let $t_i^*=\frac{\phi_i(x^*)}{\psi_i(x^*)}$, $i=1,2,\dots,p$; then $(x^*,t^*)\in\mathbb{R}^{n+p}$ is a feasible solution of the EP. According to the optimality of $x^*$ in the SRP, we know that, for each $x\in F$ (the feasible region of the SRP),

$$f(x)=\sum_{i=1}^{p}\delta_i\frac{\phi_i(x)}{\psi_i(x)}\ \ge\ f(x^*)=\sum_{i=1}^{p}\delta_i\frac{\phi_i(x^*)}{\psi_i(x^*)}=\sum_{i=1}^{p}\delta_i t_i^*=g(t^*).\tag{2}$$

In addition, if (x, t) is a feasible solution to the EP, we can obtain

$$g(t)=\sum_{i=1}^{T}\delta_i t_i+\sum_{i=T+1}^{p}\delta_i t_i\ \ge\ \sum_{i=1}^{T}\delta_i\frac{\phi_i(x)}{\psi_i(x)}+\sum_{i=T+1}^{p}\delta_i\frac{\phi_i(x)}{\psi_i(x)}=f(x).\tag{3}$$

From conclusions (2) and (3), we know that $(x^*,t^*)\in\mathbb{R}^{n+p}$ is a global optimal solution of the EP. Conversely, if $(x^*,t^*)\in\mathbb{R}^{n+p}$ is a global optimal solution of the EP, we first prove that

$$t_i^*=\frac{\phi_i(x^*)}{\psi_i(x^*)},\quad i=1,2,\dots,p.$$

Otherwise, by the feasibility of $(x^*,t^*)$, one of the following two conclusions

$$t_i^*>\frac{\phi_i(x^*)}{\psi_i(x^*)}\ \text{for some}\ i\in\{1,2,\dots,T\},\qquad t_j^*\ge\frac{\phi_j(x^*)}{\psi_j(x^*)}\ \text{for all}\ j\ne i,$$

or

$$t_i^*<\frac{\phi_i(x^*)}{\psi_i(x^*)}\ \text{for some}\ i\in\{T+1,T+2,\dots,p\},\qquad t_j^*\le\frac{\phi_j(x^*)}{\psi_j(x^*)}\ \text{for all}\ j\ne i,$$

must hold. Let $\bar t_i=\frac{\phi_i(x^*)}{\psi_i(x^*)}$, $i=1,2,\dots,p$; then $(x^*,\bar t)$ is a feasible solution and $g(\bar t)=\sum_{i=1}^{T}\delta_i\bar t_i+\sum_{i=T+1}^{p}\delta_i\bar t_i<\sum_{i=1}^{T}\delta_i t_i^*+\sum_{i=T+1}^{p}\delta_i t_i^*=g(t^*)$, a contradiction to the optimality of $(x^*,t^*)$. Hence we have

$$t_i^*=\frac{\phi_i(x^*)}{\psi_i(x^*)},\quad i=1,2,\dots,p.$$

Furthermore, for every $x\in F$, let $t_i=\frac{\phi_i(x)}{\psi_i(x)}$, $i=1,2,\dots,p$; then $(x,t)\in\mathbb{R}^{n+p}$ is a feasible solution of the EP, and by the optimality of $(x^*,t^*)$ we have

$$f(x)=\sum_{i=1}^{p}\delta_i\frac{\phi_i(x)}{\psi_i(x)}=\sum_{i=1}^{p}\delta_i t_i\ \ge\ \sum_{i=1}^{p}\delta_i t_i^*=\sum_{i=1}^{p}\delta_i\frac{\phi_i(x^*)}{\psi_i(x^*)}=f(x^*),$$

that is to say, $x^*$ is a global optimal solution of the SRP, and this completes the proof.□
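Theorem 2.1 can be sanity-checked numerically. The sketch below uses a one-ratio toy instance of our own and a brute-force grid to confirm that minimizing $\delta_1 t_1$ subject to the EP constraint reproduces the SRP optimum:

```python
# Brute force check of the equivalence on f(x) = (x + 1)/(x + 2) over
# X = [0, 1] with delta_1 = 1: minimizing t subject to the EP constraint
# phi(x) - t * psi(x) <= 0 gives the same value as minimizing f directly.
import numpy as np

xs = np.linspace(0.0, 1.0, 1001)
phi, psi = xs + 1.0, xs + 2.0
f_srp = float(np.min(phi / psi))          # direct SRP minimum

ts = np.linspace(0.0, 1.0, 1001)
feasible = (phi[None, :] - ts[:, None] * psi[None, :] <= 1e-12).any(axis=1)
f_ep = float(np.min(ts[feasible]))        # EP minimum over the t-grid

assert abs(f_srp - f_ep) < 1e-9
```

Both values equal 0.5, attained at x = 0 with t = 1/2.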

By Theorem 2.1, to solve problem SRP we only need to consider how to solve problem EP. To this end, we will make full use of the structure of the EP to establish the convex or linear relaxation problem of the EP on which the presented algorithm is built. To keep things simple, we only consider the linear situation (case 2 above); the treatment extends easily to the nonlinear circumstances satisfying assumption 1.

2.2 Relaxation technique

In this part, we concentrate on how to construct the linear relaxation programming problem of the EP under the assumption that all functions appearing in the SRP are linear; we write $\phi_i(x)=(n^i)^Tx+\beta_i$ and $\psi_i(x)=(d^i)^Tx-\gamma_i$ with $n^i,d^i\in\mathbb{R}^n$ and $\beta_i,\gamma_i\in\mathbb{R}$. Note that the objective function of problem EP is already linear, so we only need to consider the bilinear constraints. For simplicity, we denote by D the rectangle generated by the branching operation, where $D=D_1\times D_2\times\cdots\times D_p$ with $D_i=\{t_i\in\mathbb{R}\mid l_i\le t_i\le u_i\}$, and by F a subset of the feasible region which appears in the branching operation. Then we can put forward an approach for generating a linear bounding function of the constraint functions of problem EP, which is given by the following Theorem 2.2.

Theorem 2.2

For any (x, t) ∈ F × D, denote

$$\tilde c_i(t)=(n^i)^Tx-\sum_{j=1}^{n}\theta_{ij}d_j^ix_j+\beta_i+t_i\gamma_i,\quad i=1,2,\dots,p,\tag{4}$$

where

$$\theta_{ij}=\begin{cases}u_i,&d_j^i\ge 0,\\ l_i,&d_j^i<0,\end{cases}\quad i=1,2,\dots,T;\qquad \theta_{ij}=\begin{cases}l_i,&d_j^i\ge 0,\\ u_i,&d_j^i<0,\end{cases}\quad i=T+1,T+2,\dots,p,$$

then we have

  1. The function $\tilde c_i(t)$ is a lower bounding function of $c_i(t)$ over the region F × D for $i=1,2,\dots,T$ (and, by the symmetric choice of $\theta_{ij}$, an upper bounding function for $i=T+1,\dots,p$).

  2. The function $\tilde c_i(t)$ approximates $c_i(t)$ $(i=1,2,\dots,p)$ as $|u_i-l_i|\to 0$; that is, $|\tilde c_i(t)-c_i(t)|\to 0$ as $|u_i-l_i|\to 0$.

Proof

  1. For any (x, t) ∈ F × D and $i\in\{1,2,\dots,T\}$, by the definitions of $c_i(t)$ and $\tilde c_i(t)$, we have

    $$c_i(t)=\phi_i(x)-t_i\psi_i(x)=(n^i)^Tx+\beta_i-t_i\big((d^i)^Tx-\gamma_i\big)=(n^i)^Tx-\sum_{j=1}^{n}t_id_j^ix_j+t_i\gamma_i+\beta_i\ \ge\ (n^i)^Tx-\sum_{j=1}^{n}\theta_{ij}d_j^ix_j+\beta_i+t_i\gamma_i=\tilde c_i(t).$$

    So conclusion (1) holds; the case $i>T$ is analogous.

  2. For any (x, t) ∈ F × D, by the definitions of $c_i(t)$ and $\tilde c_i(t)$, we have

    $$\left|\tilde c_i(t)-c_i(t)\right|=\left|\sum_{j=1}^{n}\theta_{ij}d_j^ix_j-\sum_{j=1}^{n}t_id_j^ix_j\right|=\left|\sum_{j=1}^{n}(\theta_{ij}-t_i)d_j^ix_j\right|\le\sum_{j=1}^{n}\left|d_j^ix_j\right|\,|u_i-l_i|\le M_i|u_i-l_i|.\tag{5}$$

    Here $M_i$ is an upper bound of $\sum_{j=1}^{n}\left|d_j^ix_j\right|$, whose existence follows from the continuity of this function over a compact region. From conclusion (5), it is easy to see that

    $$\left|\tilde c_i(t)-c_i(t)\right|\to 0\quad\text{as}\ |u_i-l_i|\to 0.$$

Thus the proof is complete.□

Therefore, according to the above discussion, we can obtain the linear relaxation programming problem REP_D, corresponding to the outcome space D of problem EP_D, as follows:

$$\mathrm{REP}_D:\quad\min\ g(t)=\sum_{i=1}^{p}\delta_it_i\quad\text{s.t.}\quad \tilde c_i(t)\le 0,\ i=1,2,\dots,T,\qquad \tilde c_i(t)\ge 0,\ i=T+1,T+2,\dots,p,\qquad x\in F,\ t\in D,$$

where $\tilde c_i(t)$ is defined by (4). From now on, we will use the symbol EP_D to denote the problem EP corresponding to the outcome space D; any similar symbol in the rest of this paper should be understood in the same way.

From the construction of REP_D, it is not hard to see that every feasible solution of EP_D is also a feasible solution of REP_D and that the optimal value of EP_D is not less than that of REP_D; thus REP_D provides a valid lower bound for the optimal value of problem EP_D. Moreover, as indicated by Theorem 2.2, problem REP_D approximates EP_D as $\max_{1\le i\le p}|u_i-l_i|\to 0$.
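For the all-linear case, REP_D is an ordinary LP in the joint variables (x, t). The sketch below assembles and solves it with scipy.optimize.linprog, assuming $x\ge 0$ on F (so that the $\theta_{ij}$ substitution bounds the bilinear terms) and using a generic affine parametrization $\phi_i=N_i^Tx+\beta_i$, $\psi_i=D_i^Tx+\gamma_i$; every identifier is ours, not the paper's:

```python
# Assemble and solve the linear relaxation REP_D in the joint variables
# (x, t): bilinear terms t_i * d_ij * x_j are replaced by theta_ij * d_ij * x_j
# with theta_ij chosen from the node bounds [l_i, u_i].
import numpy as np
from scipy.optimize import linprog

def solve_rep(N, beta, Dm, gamma, delta, T, A, b, l, u, xbounds):
    p, n = N.shape
    A_ub, b_ub = [], []
    for i in range(p):
        if i < T:   # underestimate -t_i * d_ij * x_j (delta_i > 0 rows)
            theta = np.where(Dm[i] >= 0, u[i], l[i])
        else:       # overestimate it for the >= constraints
            theta = np.where(Dm[i] >= 0, l[i], u[i])
        row_t = np.zeros(p)
        row_t[i] = -gamma[i]
        row = np.concatenate([N[i] - theta * Dm[i], row_t])
        if i < T:   # c~_i(t) <= 0
            A_ub.append(row)
            b_ub.append(-beta[i])
        else:       # c~_i(t) >= 0  <=>  -c~_i(t) <= 0
            A_ub.append(-row)
            b_ub.append(beta[i])
    for a_row, rhs in zip(A, b):            # original constraints Ax <= b
        A_ub.append(np.concatenate([a_row, np.zeros(p)]))
        b_ub.append(rhs)
    c = np.concatenate([np.zeros(n), delta])  # objective g(t) = delta^T t
    return linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                   bounds=list(xbounds) + list(zip(l, u)))
```

For a single ratio $(x+1)/(x+1)$ on $x\in[0,1]$ with $t\in[0.5,2]$, the relaxation value is 0.5, a valid lower bound on the true minimum 1.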

3 Key operations for algorithm design

To present the reduced space branch and bound algorithm for solving the SRP, we describe in this section three fundamental operations: branching, condensing and bounding.

3.1 Branching operation

In this paper, we adopt a so-called adapted partition technique to subdivide the initial box $D^0$ into sub-boxes. The adapted partition is performed in the reduced outcome space associated with problem EP rather than in the n-dimensional variable space; this is where the method differs from a general branch and bound algorithm performed in the variable space. For any subset $D=\{t\in\mathbb{R}^p\mid l_i\le t_i\le u_i\}\subset D^0$, the specific division procedure is given as follows.

Partition regulation

  1. Let $r\in\arg\max\{u_i-l_i\mid i=1,2,\dots,p\}$.

  2. Let θr = (1 − α)lr + αur.

  3. Subdivide $D_r$ into two intervals $D_r^1=[l_r,\theta_r]$ and $D_r^2=[\theta_r,u_r]$, and let

    $$D'=D_1\times D_2\times\cdots\times D_{r-1}\times D_r^1\times D_{r+1}\times\cdots\times D_p$$

    and

    $$D''=D_1\times D_2\times\cdots\times D_{r-1}\times D_r^2\times D_{r+1}\times\cdots\times D_p.$$

Thus, region D is divided into two new hyper-rectangles D′ and D″.
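The partition rule above can be sketched in a few lines (function and variable names are ours):

```python
# Adapted partition rule of Section 3.1: split the box [l, u] along its
# longest edge r at the point theta_r = (1 - alpha) * l_r + alpha * u_r.
def split_box(l, u, alpha=0.5):
    r = max(range(len(l)), key=lambda i: u[i] - l[i])   # longest edge
    theta = (1 - alpha) * l[r] + alpha * u[r]
    l1, u1 = list(l), list(u)
    l2, u2 = list(l), list(u)
    u1[r] = theta          # D'  keeps [l_r, theta_r]
    l2[r] = theta          # D'' keeps [theta_r, u_r]
    return (l1, u1), (l2, u2)
```

For example, split_box([0, 0], [1, 3]) with α = 0.5 splits the second edge at θ = 1.5.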

It can be seen from the above partition rule that only the p-dimensional outcome space is partitioned in the algorithm; the n-dimensional variable space is never divided. This is precisely where our algorithm differs from the usual branch and bound algorithm, and, as will be seen, this makes the algorithm quite efficient for problems in which the number of ratios in the objective function is far smaller than the number of variables.

3.2 Condensing and bounding technique

For any rectangle $D^k\subseteq D^0$ generated by the branching operation in the k-th iteration, the condensing operation consists in reducing the current partition still of interest by incising the part which cannot contain a global optimal solution of problem EP_{D^0}; the bounding operation aims at estimating an upper and (or) lower bound of the optimal value of the EP and removing the subregions that need not be explored further.

In the k-th iteration, we first solve the linear relaxation programming problem REP_{D^k}; let its optimal solution be $(\bar x^k,\bar t^k)$. Setting $\tilde t_i^k=\frac{\phi_i(\bar x^k)}{\psi_i(\bar x^k)}$ yields a feasible solution $(\bar x^k,\tilde t^k)$ of problem EP_{D^0}, and of course the objective value of $(\bar x^k,\tilde t^k)$ is an upper bound for the optimal value of EP_{D^0}. Furthermore, the optimal value of REP_{D^k} is a lower bound of the optimal value of EP_{D^k}, and the smallest optimal value over all subproblems in the k-th iteration is a lower bound for the optimal value of EP_{D^0}. Assume that $\bar f$ is the best upper bound of the optimum of EP_{D^0} known so far; then the condensing technique can be described in the form of the following theorem.

Theorem 3.1

For any sub-region $D^k\subseteq D^0$, assume $f_{\min}^k$ is the optimal value of problem REP_{D^k}; then the following two conclusions hold:

  1. If $f_{\min}^k>\bar f$, then $D^k$ does not contain a global optimal solution of problem EP_{D^0}, so it can be removed.

  2. If $f_{\min}^k\le\bar f$, then for each $j\in\{1,2,\dots,T\}$ the region $\bar D^k\subseteq D^k$ can be incised, and for each $j\in\{T+1,T+2,\dots,p\}$ the region $\bar{\bar D}^k\subseteq D^k$ can be incised, where

    $$\bar D^k=D_1^k\times D_2^k\times\cdots\times D_{j-1}^k\times\bar D_j^k\times D_{j+1}^k\times\cdots\times D_p^k,$$

    and

    $$\bar{\bar D}^k=D_1^k\times D_2^k\times\cdots\times D_{j-1}^k\times\bar{\bar D}_j^k\times D_{j+1}^k\times\cdots\times D_p^k,$$

    with

    $$r_j^k=\begin{cases}\dfrac{1}{\delta_j}\left(\bar f-f_{\min}^k\right)+u_j^k,& j=1,2,\dots,T,\\[2mm] \dfrac{1}{\delta_j}\left(\bar f-f_{\min}^k\right)+l_j^k,& j=T+1,T+2,\dots,p,\end{cases}$$

    and

    $$\bar D_j^k=(r_j^k,u_j^k]\cap D_j^k,\qquad \bar{\bar D}_j^k=[l_j^k,r_j^k)\cap D_j^k.$$

Proof

  1. The first conclusion is obvious, and its proof is omitted.

  2. For simplicity’s sake, we denote

    $$\bar M^k=\left\{(x,t)\in F\times\bar D^k\ \middle|\ \tilde c_i(t)\le 0,\ i=1,2,\dots,T;\ \tilde c_i(t)\ge 0,\ i=T+1,T+2,\dots,p\right\},$$

    and

    $$\bar{\bar M}^k=\left\{(x,t)\in F\times\bar{\bar D}^k\ \middle|\ \tilde c_i(t)\le 0,\ i=1,2,\dots,T;\ \tilde c_i(t)\ge 0,\ i=T+1,T+2,\dots,p\right\}.$$

    Since $f_{\min}^k\le\bar f$, for each $j\in\{1,2,\dots,T\}$ and $(x,t)\in\bar M^k$ we have $t_j>r_j^k$, while $\delta_jt_j\le\delta_ju_j^k$ over $D^k$, so that

    $$\min_{(x,t)\in\bar M^k}\sum_{i=1}^{p}\delta_it_i\ \ge\ f_{\min}^k+\min_{(x,t)\in\bar M^k}\delta_jt_j-\delta_ju_j^k\ >\ f_{\min}^k+\delta_jr_j^k-\delta_ju_j^k=\bar f;$$

    therefore, $\bar M^k$ does not contain a global optimal solution of EP_{D^0}, and $\bar D^k$ can be incised.

    In the same way, when j ∈ {T + 1, T + 2, ⋯, p}, we have

    $$\min_{(x,t)\in\bar{\bar M}^k}\sum_{i=1}^{p}\delta_it_i\ \ge\ f_{\min}^k+\min_{(x,t)\in\bar{\bar M}^k}\delta_jt_j-\delta_jl_j^k\ >\ f_{\min}^k+\delta_jr_j^k-\delta_jl_j^k=\bar f;$$

    similarly, $\bar{\bar M}^k$ does not contain a global optimal solution of EP_{D^0}, so $\bar{\bar D}^k$ will be incised in the algorithm. □

By Theorem 3.1, the condensing operation can cut away a large part of the current region in which the optimal solution does not exist, so the rapid growth of the number of branching nodes can be suppressed from iteration to iteration. Additionally, unlike a normal branch and bound algorithm, the branching method used in this study can adjust the relative measure of the partitions by adopting different ratios α, and thus the convergence speed of the algorithm can be enhanced.

4 Algorithm statement and convergence analysis

Based upon the above results and technique, the basic steps of the reduced space branch and bound algorithm associated with efficient accelerating techniques for globally solving the SRP will be summarized in this section.

4.1 Algorithm statement

By integrating the condensing technique and partition skills into the reduced space branch and bound scheme, the presented algorithm for the SRP can be described as follows.

  0. (Initialization) Set the convergence tolerance $\epsilon\ge 0$, the iteration counter k = 0 and the partition ratio $\alpha\in(0,1)$. Compute the values of $l_i^0,u_i^0$ for each $i=1,2,\dots,p$, then determine the optimal solution $(x^0,t^0)$ and the optimal value $f_{\min}$ by solving the linear relaxation programming problem REP_{D^0}. Let

    $$\underline f=f_{\min},\qquad t_i^*=\frac{\phi_i(x^0)}{\psi_i(x^0)},\ i=1,2,\dots,p,\qquad x^*=x^0;$$

    clearly, $(x^*,t^*)$ is a feasible solution of EP_{D^0}. Let $\bar f=g(t^*)=\sum_{i=1}^{p}\delta_it_i^*$. If $\bar f-\underline f\le\epsilon$, stop: $x^*$ and $\bar f$ are an optimal solution and the optimal value of the SRP, respectively. Otherwise, set F = ∅, k = 1, $D^1=D^0$, let the set of all partitions still of interest be $\Theta_k=\{D^1\}$, and go to Step 1.

  1. (Condensing) For each rectangle $D^k\in\Theta_k$, incise the invalid part by the condensing technique described in Section 3.2, and substitute $D^k$ with the remaining partition.

  2. (Branching) Subdivide region $D^k$ into two new regions $D^{k1}$ and $D^{k2}$ according to the ratio partition rule; these form the collection of new partitions.

  3. (Bounding) For $\nu\in\{1,2\}$, obtain the optimal solution $(x^{k\nu},t^{k\nu})$ and the optimal value $f_{\min}^{k\nu}$ by solving problem REP_{D^{kν}}. Then let

    $$\bar t_i^{\nu}=\frac{\phi_i(x^{k\nu})}{\psi_i(x^{k\nu})},\quad i=1,2,\dots,p,\ \nu\in\{1,2\},$$

    update the upper bound by setting $\bar f=\min\{\bar f,g(\bar t^{\nu})\}$, and let $x^*$ be the feasible solution with the best objective value currently known. If $f_{\min}^{k\nu}>\bar f$, delete the node associated with $D^{k\nu}$ from $\Theta_k$; if $\Theta_k=\emptyset$, stop: $x^*$ and $\bar f$ are an optimal solution and the optimal value of the SRP, respectively. Otherwise, update the lower bound by setting $\underline f=\min\{f_{\min}^{k\nu}\}$.

  4. (Termination) If $\bar f-\underline f\le\epsilon$, the algorithm stops, and $x^*$ and $\bar f$ are an $\epsilon$-global minimizer and the $\epsilon$-global minimum of the SRP, respectively. Otherwise, set k = k + 1 and return to Step 1.
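The steps above can be sketched end to end for the all-linear case with every $\delta_i>0$ (so T = p), minimizing over $Ax\le b$ and a box with $x\ge 0$ and positive denominators. The sketch assumes a feasible instance, uses a best-first node selection instead of the paper's bookkeeping, omits the condensing step, and all names are ours:

```python
# Compact reduced space branch and bound for
# min sum_i (N_i^T x + beta_i)/(Dm_i^T x + gamma_i), Ax <= b, lb <= x <= ub,
# with lb >= 0 and positive denominators. Branching happens only in the
# p-dimensional t-space.
import heapq
import numpy as np
from scipy.optimize import linprog

def srp_bb(N, beta, Dm, gamma, delta, A, b, lb, ub,
           eps=1e-4, alpha=0.5, max_nodes=500):
    p, n = N.shape
    xb = list(zip(lb, ub))
    lpmin = lambda c: linprog(c, A_ub=A, b_ub=b, bounds=xb)
    ratios = lambda x: np.array([(N[i] @ x + beta[i]) / (Dm[i] @ x + gamma[i])
                                 for i in range(p)])
    # initial outcome box D^0: min num / max den and max num / min den
    l0 = np.array([(lpmin(N[i]).fun + beta[i]) / (-lpmin(-Dm[i]).fun + gamma[i])
                   for i in range(p)])
    u0 = np.array([(-lpmin(-N[i]).fun + beta[i]) / (lpmin(Dm[i]).fun + gamma[i])
                   for i in range(p)])

    def relax(l, u):               # REP_D over the joint variables (x, t)
        Aub = [np.concatenate([a, np.zeros(p)]) for a in np.atleast_2d(A)]
        bub = list(b)
        for i in range(p):         # c~_i(t) <= 0, theta from the sign of d_ij
            theta = np.where(Dm[i] >= 0, u[i], l[i])
            row_t = np.zeros(p)
            row_t[i] = -gamma[i]
            Aub.append(np.concatenate([N[i] - theta * Dm[i], row_t]))
            bub.append(-beta[i])
        c = np.concatenate([np.zeros(n), delta])
        return linprog(c, A_ub=np.array(Aub), b_ub=bub,
                       bounds=xb + list(zip(l, u)))

    res = relax(l0, u0)
    x = res.x[:n]
    best_f, best_x = float(delta @ ratios(x)), x      # incumbent from root
    heap = [(res.fun, 0, l0, u0)]
    count = 0
    while heap and count < max_nodes:
        lbnd, _, l, u = heapq.heappop(heap)
        if best_f - lbnd <= eps:                      # gap closed
            break
        r = int(np.argmax(u - l))                     # adapted partition rule
        th = (1 - alpha) * l[r] + alpha * u[r]
        u1, l2 = u.copy(), l.copy()
        u1[r], l2[r] = th, th
        for ll, uu in ((l, u1), (l2, u)):
            res = relax(ll, uu)
            if res.status != 0:
                continue                              # infeasible node
            x = res.x[:n]
            f = float(delta @ ratios(x))              # feasible upper bound
            if f < best_f:
                best_f, best_x = f, x
            count += 1
            if res.fun < best_f - eps:                # keep promising nodes
                heapq.heappush(heap, (res.fun, count, ll, uu))
    return best_f, best_x
```

Applied to the data of Example 5.1 in Section 5 (whose reported optimum is 1.6232), this sketch returns a value close to 1.6232.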

4.2 Convergence analysis

In this section, we illustrate the convergence property of the algorithm by the following theorem.

Theorem 4.1

The reduced space branch and bound algorithm described above either terminates within finitely many iterations and yields an $\epsilon$-global solution of the SRP, or generates an infinite sequence of feasible solutions every accumulation point of which is a global solution of the SRP.

Proof

If the algorithm terminates at the k-th iteration, then by the termination criterion it follows that $\bar f-\underline f\le\epsilon$. From Step 0 and Step 3 of the algorithm, a feasible solution $x^k$ has been found satisfying $f(x^k)-\underline f\le\epsilon$. At the same time, we have $\underline f\le f_{opt}\le f(x^k)$, where $f_{opt}$ is the optimal value of EP_{D^0}. Taking these relations together, it follows that

$$f_{opt}\le f(x^k)\le\underline f+\epsilon\le f_{opt}+\epsilon,$$

so we can conclude that $x^k$ is an $\epsilon$-global optimal solution of EP_{D^0}, and of course also of the SRP.

If the algorithm is infinite, then by solving REP_{D^k} it generates an infinite feasible solution sequence $\{(x^k,t^k)\}$. Let $\bar t_i^k=\frac{\phi_i(x^k)}{\psi_i(x^k)}$; then $\{(x^k,\bar t^k)\}$ is a feasible solution sequence of EP_{D^0}. Since the sequence $\{x^k\}$ is bounded, it has accumulation points, and we may assume $\lim_{k\to\infty}x^k=x^*$ without loss of generality. On the other hand, we get

$$\lim_{k\to\infty}\bar t_i^k=\lim_{k\to\infty}\frac{\phi_i(x^k)}{\psi_i(x^k)}=\frac{\phi_i(x^*)}{\psi_i(x^*)}\tag{6}$$

by the continuity of ϕi(x) and ψi(x).

Also, according to the branching rule described before, we know that

$$\lim_{k\to\infty}l_i^k=\lim_{k\to\infty}u_i^k=t_i^*;\tag{7}$$

what is more, noting that $l_i^k\le\frac{\phi_i(x^k)}{\psi_i(x^k)}\le u_i^k$, we conclude that

$$t_i^*=\lim_{k\to\infty}\frac{\phi_i(x^k)}{\psi_i(x^k)}=\frac{\phi_i(x^*)}{\psi_i(x^*)}=\lim_{k\to\infty}\bar t_i^k$$

by (6) and (7); therefore $(x^*,t^*)$ is also a feasible solution of EP_{D^0}. Furthermore, since the lower bound sequence $\{\underline f^k\}$ of the optimal value is nondecreasing and bounded above by the optimal value $f_{opt}$, combining this with the continuity of g(t) we have

$$\lim_{k\to\infty}\underline f^k=g(t^*)\le f_{opt}\le\lim_{k\to\infty}g(\bar t^k)=g(t^*).$$

That is, $(x^*,t^*)$ is an optimal solution of EP_{D^0}, and of course $x^*$ is an optimal solution of the SRP by the equivalence of problems SRP and EP_{D^0}. This completes the proof. □

5 Numerical experiments

To test the efficiency and solution quality of the proposed algorithm, we performed some computational examples on a personal computer with an Intel Core i5 processor at 2.40 GHz and 4 GB of RAM. The code is written in Matlab 2014a and calls LINPROG for the linear relaxation subproblems and CVX for the convex relaxation subproblems.

We consider some numerical examples from the recent literature [14, 20, 21, 22, 23, 24, 25, 26] and a randomly generated test problem to verify the performance of the algorithm. The numerical tests and results are listed as follows.

Example 5.1

([23]).

$$\begin{array}{ll}\min & \dfrac{x_1+2x_2+2}{3x_1-4x_2+5}+\dfrac{4x_1-3x_2+4}{2x_1+x_2+3}\\[2mm]\text{s.t.}& x_1+x_2-1.5\le 0,\quad x_1-x_2\le 0,\\ & 0\le x_1\le 1,\quad 0\le x_2\le 1.\end{array}$$

Example 5.2

([14, 22, 23]).

$$\begin{array}{ll}\max & \dfrac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}+\dfrac{3x_1+4x_2+50}{4x_1+4x_2+5x_3+50}+\dfrac{x_1+2x_2+5x_3+50}{x_1+5x_2+5x_3+50}+\dfrac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50}\\[2mm]\text{s.t.}& 2x_1+x_2+5x_3\le 10,\quad x_1+6x_2+3x_3\le 10,\quad 5x_1+9x_2+2x_3\le 10,\\ & 9x_1+7x_2+3x_3\le 10,\quad x_1\ge 0,\ x_2\ge 0,\ x_3\ge 0.\end{array}$$

Example 5.3

([14]).

$$\begin{array}{ll}\max & 0.9\times\dfrac{x_1+2x_2+2}{3x_1-4x_2+5}-0.1\times\dfrac{4x_1-3x_2+4}{2x_1+x_2+3}\\[2mm]\text{s.t.}& x_1+x_2-1.5\le 0,\quad x_1-x_2\le 0,\quad 0\le x_1\le 1,\quad 0\le x_2\le 1.\end{array}$$

Example 5.4

([21]).

$$\begin{array}{ll}\max & \dfrac{3x_1+4x_2+50}{3x_1+5x_2+4x_3+50}-\dfrac{3x_1+5x_2+3x_3+50}{5x_1+5x_2+4x_3+50}-\dfrac{x_1+2x_2+4x_3+50}{5x_2+4x_3+50}-\dfrac{4x_1+3x_2+3x_3+50}{3x_2+3x_3+50}\\[2mm]\text{s.t.}& 6x_1+3x_2+3x_3\le 10,\quad 10x_1+3x_2+8x_3\le 10,\quad x_1\ge 0,\ x_2\ge 0,\ x_3\ge 0.\end{array}$$

Example 5.5

([14, 24]).

$$\begin{array}{ll}\max & \dfrac{37x_1+73x_2+13}{13x_1+13x_2+13}+\dfrac{63x_1-18x_2+39}{13x_1+26x_2+13}\\[2mm]\text{s.t.}& 5x_1-3x_2=3,\quad 1.5\le x_1\le 3.\end{array}$$

Example 5.6

([20, 23]).

$$\begin{array}{ll}\max & \dfrac{3x_1+5x_2+3x_3+50}{3x_1+4x_2+5x_3+50}+\dfrac{3x_1+4x_2+50}{4x_1+3x_2+2x_3+50}+\dfrac{4x_1+2x_2+4x_3+50}{5x_1+4x_2+3x_3+50}\\[2mm]\text{s.t.}& 6x_1+3x_2+3x_3\le 10,\quad 10x_1+3x_2+8x_3\le 10,\quad x_1\ge 0,\ x_2\ge 0,\ x_3\ge 0.\end{array}$$

Example 5.7

([14, 24]).

$$\begin{array}{ll}\max & \displaystyle\sum_{i=1}^{5}\frac{(c^i)^Tx+r_i}{(d^i)^Tx+s_i}\\[2mm]\text{s.t.}& Ax\le b,\quad x\ge 0,\end{array}$$

where

$$b=(15.7, 31.8, 36.4, 38.5, 40.3, 10.0, 89.8, 5.8, 2.7, 16.3, 14.6, 72.7, 57.7, 34.5, 69.1)^T,$$

A =
( 1.8  2.2  0.8  4.1  3.8  2.3  0.8  2.5  1.6  0.2  4.5  1.8
  4.6  2.0  1.4  3.2  4.2  3.3  1.9  0.7  0.8  4.4  4.4  2.0
  3.7  2.8  3.2  2.0  3.7  3.3  3.5  0.7  1.5  3.1  4.5  1.1
  0.6  0.6  2.5  4.1  0.6  3.3  2.8  0.1  4.1  3.2  1.2  4.3
  1.8  1.6  4.5  1.3  4.6  3.3  4.2  1.2  1.9  2.4  3.4  2.9
  0.5  4.1  1.7  3.9  0.1  3.9  1.5  1.6  2.3  2.3  3.2  3.9
  0.3  1.7  1.3  4.7  0.9  3.9  0.5  1.2  3.8  0.6  0.2  1.5
  0.5  4.2  3.6  0.6  4.8  1.5  0.3  0.6  3.6  0.2  3.8  2.8
  0.1  3.3  4.3  2.4  4.1  1.7  1.0  3.3  4.4  3.7  1.1  1.4
  0.6  2.2  2.5  1.3  4.3  2.9  4.1  2.7  0.8  2.9  3.5  1.2
  4.3  1.9  4.0  2.6  1.8  2.5  0.6  1.3  4.3  2.3  4.1  1.1
  0.0  0.4  4.5  4.4  1.2  3.8  1.9  1.2  3.0  1.1  0.2  2.5
  0.1  1.7  2.9  1.5  4.7  0.3  4.2  4.4  3.9  4.4  4.7  1.0
  3.8  1.4  4.7  1.9  3.8  3.5  1.5  2.3  3.7  4.2  2.7  0.1
  0.2  0.1  4.9  0.9  0.1  4.3  1.6  2.6  1.5  1.0  0.8  1.6 ),

c^1 = (0.0, 0.1, 0.3, 0.3, 0.5, 0.5, 0.8, 0.4, 0.4, 0.2, 0.2, 0.1),  r_1 = 14.6
c^2 = (0.2, 0.5, 0.0, 0.4, 0.1, 0.6, 0.1, 0.2, 0.2, 0.1, 0.2, 0.3),  r_2 = 7.1
c^3 = (0.1, 0.3, 0.0, 0.1, 0.1, 0.0, 0.3, 0.2, 0.0, 0.3, 0.5, 0.3),  r_3 = 1.7
c^4 = (0.1, 0.5, 0.1, 0.1, 0.2, 0.5, 0.6, 0.7, 0.5, 0.7, 0.1, 0.1),  r_4 = 4.0
c^5 = (0.7, 0.5, 0.1, 0.2, 0.1, 0.3, 0.0, 0.1, 0.2, 0.6, 0.5, 0.2),  r_5 = 6.8
d^1 = (0.3, 0.1, 0.1, 0.1, 0.1, 0.4, 0.2, 0.2, 0.4, 0.2, 0.4, 0.3),  s_1 = 14.2
d^2 = (0.0, 0.1, 0.1, 0.3, 0.3, 0.2, 0.3, 0.0, 0.4, 0.5, 0.3, 0.1),  s_2 = 1.7
d^3 = (0.8, 0.4, 0.7, 0.4, 0.4, 0.5, 0.2, 0.8, 0.5, 0.6, 0.2, 0.6),  s_3 = 8.1
d^4 = (0.0, 0.6, 0.3, 0.3, 0.0, 0.2, 0.3, 0.6, 0.2, 0.5, 0.8, 0.5),  s_4 = 26.9
d^5 = (0.4, 0.2, 0.2, 0.9, 0.5, 0.1, 0.3, 0.8, 0.2, 0.6, 0.2, 0.4),  s_5 = 3.7

Example 5.8

([25]).

$$\begin{array}{ll}\min & x_1+x_2+x_3\\[1mm]\text{s.t.}& \dfrac{833.33252x_4}{x_1x_6}+\dfrac{100}{x_6}-\dfrac{83333.333}{x_1x_6}\le 1,\\[2mm] & \dfrac{1250x_5-1250x_4}{x_2x_7}+\dfrac{x_4}{x_7}\le 1,\\[2mm] & \dfrac{1250000-2500x_5}{x_3x_8}+\dfrac{x_5}{x_8}\le 1,\\[2mm] & 0.0025x_4+0.0025x_6\le 1,\quad -0.0025x_4+0.0025x_5+0.0025x_7\le 1,\\ & 0.01x_8-0.01x_5\le 1,\\ & 100\le x_1\le 10000,\quad 1000\le x_2,x_3\le 10000,\quad 10\le x_i\le 1000,\ i=4,5,\dots,8.\end{array}$$

Example 5.9

([26]).

{min0.5(x110)x2x1s.t.x2x3+x1+0.5x1x31001xi100,i=1,2,3.

Example 5.10

(Random test).

$$\begin{array}{ll}\max & \displaystyle\sum_{i=1}^{p}\delta_i\frac{(n^i)^Tx+\beta_i}{(d^i)^Tx+\gamma_i}\\[2mm]\text{s.t.}& Ax\le b,\quad x\ge 0,\end{array}$$

where the elements of the matrix $A\in\mathbb{R}^{m\times n}$, of $b\in\mathbb{R}^m$, of $n^i,d^i\in\mathbb{R}^n$ and the constant terms $\beta_i,\gamma_i\in\mathbb{R}$ of the numerators and denominators are randomly generated in the interval [0, 1]; this agrees with the way random instances are generated in [14]. In our experiments, however, $\delta_i\in\mathbb{R}$ is randomly generated in the interval [−1, 1] rather than in [0, 1], which is much more challenging for the algorithm. The results of the contrast experiments and the random tests are shown in Tables 1-3, and the symbols used in the tables have the following meaning: p, m and n represent the number of affine ratios in the objective function, the number of constraints and the number of constrained variables, respectively; Ave.Ite, Ave.Nod and Ave.Time stand for the average number of iterations, the average number of subproblems and the average CPU time in seconds of the algorithm; ϵ expresses the error precision used in the algorithm, and α refers to the split ratio used in the branching operation.

Table 1

Results of the numerical contrast tests on Examples 5.1-5.9.

| Example | Method | ϵ | α | Optimal solution | Optimal value | Iter |
|---|---|---|---|---|---|---|
| 1 | [21] | 1e-8 | - | (0.0, 0.283935547) | 1.623183358 | 71 |
| 1 | our | 1e-9 | 0.5 | (0.0, 0.2840) | 1.62318 | 10 |
| 2 | [18] | 1e-9 | - | (1.1111, 0.0000, 0.0000) | 4.0907 | 1289 |
| 2 | [20] | 1e-6 | - | (1.1111, 1.365e-5, 1.351e-5) | 4.0814 | 8139 |
| 2 | [21] | 1e-5 | - | (0.0013, 0.0000, 0.0000) | 4.087412 | 1640 |
| 2 | our | 1e-9 | 0.5 | (1.1111, 0.0000, 0.0000) | 4.09070 | 1 |
| 3 | [18] | 1e-6 | - | (0, 1) | 3.575 | 1 |
| 3 | our | 1e-9 | 0.5 | (0.00000, 1.00000) | 3.57500 | 1 |
| 4 | [19] | 1e-6 | - | (-1.838e-16, 3.3333, 0.0) | -1.9 | 8 |
| 4 | our | 1e-9 | 0.5 | (0.00000, 3.33333, 0.00000) | -1.9 | 1 |
| 5 | [18] | 1e-6 | - | (3, 4) | 5 | 1 |
| 5 | [22] | 1e-2 | - | (3, 4) | 5 | 11 |
| 5 | our | 1e-9 | 0.3 | (3.00000, 4.00000) | 5.0000 | 2 |
| 6 | [17] | 1e-6 | - | (0.00, 1.6725, 0.0000) | 3.0009 | 1033 |
| 6 | [21] | 1e-2 | - | (0.00, 3.3333, 0.0) | 3.0029 | 2119 |
| 6 | our | 1e-9 | 0.5 | (0.00, 3.33333, 0.00000) | 3.0029 | 21 |
| 7 | [18] | 1e-3 | - | x(18) | 16.2619 | 927 |
| 7 | [22] | 1e-2 | - | x(22) | 16.0779786 | 20 |
| 7 | our | 1e-3 | 0.65 | x(our1) | 16.262836 | 26 |
| 8 | [23] | 1e-6 | - | x(23) | 7049.24682 | - |
| 8 | our | 1e-6 | 0.50 | x(our2) | 6944.2480 | 3135 |
| 9 | [24] | 1e-6 | - | x(24) | -83.249728 | - |
| 9 | our | 1e-6 | 0.50 | x(our3) | -85.688595 | 8 |

Table 2

Computational results of the random test (Example 5.10) under variation of the number of variables n.

| (p, m, n) | Ave.Ite | Ave.Nod | Ave.Time(s) |
|---|---|---|---|
| (2, 10, 10) | 23 | 31 | 0.507 |
| (2, 10, 20) | 30 | 42 | 0.729 |
| (2, 10, 30) | 36 | 53 | 1.327 |
| (2, 10, 40) | 47 | 66 | 1.465 |
| (2, 10, 50) | 74 | 105 | 2.323 |
| (2, 10, 60) | 63 | 88 | 1.942 |
| (2, 10, 70) | 57 | 79 | 1.799 |
| (2, 10, 80) | 54 | 77 | 1.707 |
| (2, 10, 90) | 63 | 91 | 2.125 |
| (2, 10, 100) | 175 | 269 | 5.567 |

Table 3

Computational results of the random test (Example 5.10) under variation of the number of ratios p.

| (p, m, n) | Ave.Ite | Ave.Nod | Ave.Time(s) |
|---|---|---|---|
| (2, 10, 100) | 175 | 269 | 5.567 |
| (3, 10, 90) | 297 | 511 | 9.056 |
| (4, 10, 80) | 376 | 660 | 10.211 |
| (5, 10, 70) | 598 | 973 | 18.937 |
| (6, 10, 60) | 4235 | 7508 | 128.986 |
| (7, 10, 50) | 4654 | 8274 | 130.364 |
| (8, 10, 40) | 5420 | 9318 | 150.624 |
| (9, 10, 30) | 7338 | 11782 | 217.592 |
| (10, 10, 20) | 11844 | 20473 | 318.62 |

where

x(18) = (6.24409, 20.0249, 3.79672, 5.93972, 0, 7.43852, 0, 23.2833, 0.515015, 40.9896, 0, 3.14363)^T,
x(22) = (6.223689, 20.060317, 3.774684, 5.947841, 0, 7.456686, 0, 23.312579, 0.000204, 41.031824, 0, 3.171106)^T,
x(23) = (578.973143, 1359.572730, 5110.701048, 181.9898, 295.5719, 218.0101, 286.4179, 395.5719),
x(24) = (87.614446, 8.754375, 1.413643, 19.311410),
x(our1) = (6.22442, 20.05821, 3.77441, 5.94859, 0.00001, 7.45691, 0.00002, 23.31133, 0.00012, 41.03002, 0.00001, 3.17225)^T,
x(our2) = (579.326059, 1359.9445, 5109.977472, 182.019317, 295.600901, 217.980682, 286.418416, 395.600901),
x(our3) = (87.614446, 8.754375, 1.413643, 19.311410).

The computational results in Table 2 and Table 3 indicate that our algorithm has good performance and is effective for relatively large-scale problems in which the number of ratios in the objective function is not too large. Meanwhile, we find that the average numbers of iterations and subproblems that need to be solved by the algorithm and the average CPU time grow only mildly as the number of variables increases, although they grow considerably faster with the number of ratios. Based on the results of the above numerical examples, our algorithm is quite robust and efficient, and it can be used successfully to solve the sum of affine ratios problem SRP.

6 Concluding remarks

In this paper, a new branch and bound optimization algorithm is presented for globally solving a class of sum of ratios problems. The approach consists of three steps. First, the original problem is reformulated into an equivalent problem coupled with an outcome space; then the convex relaxation programming problem is established by utilizing the lower and upper bounds of the auxiliary variables. Finally, a new condensing operation based on the lower bound of the optimal value is presented for incising the whole or a part of the investigated region that does not contain a global optimal solution of the equivalent problem. By combining the adapted partition rule and the accelerating technique with the reduced space branch and bound scheme, the presented algorithm is obtained. Numerical results show that the proposed algorithm can suppress the rapid growth of the branching tree during the search process, and several random examples illustrate the efficiency and stability of the algorithm.

  1. Competing interests: The authors declare that there is no conflict of interest regarding the publication of this paper.

  2. Author’s contributions: Both authors contributed equally to the manuscript, and they read and approved the final manuscript.

Acknowledgement

This paper is supported by the Science and Technology Key Project of the Education Department of Henan Province (14A110024, 15A110023), the National Natural Science Foundation of Henan Province (152300410097), the Science and Technology Project of Henan Province (182102310941), the Cultivation Plan of Young Key Teachers in Colleges and Universities of Henan Province (2016GGJS-107), the Higher School Key Scientific Research Projects of Henan Province (18A110019, 17A110021), and the Major Scientific Research Projects of Henan Institute of Science and Technology (2015ZD07). We also thank the authors of all the references.

References

[1] Stancu-Minasian I.M., Fractional Programming, Kluwer Academic Publishers, Boston, 1997. DOI: 10.1007/978-94-009-0035-6

[2] Charnes A., Cooper W.W., Programming with linear fractional functionals, Nav. Res. Log. Q., 1962, 9, 181-186. DOI: 10.1002/nav.3800090303

[3] Horst R., Pardalos P.M., Handbook of Global Optimization, Kluwer Academic Publishers, Dordrecht, 1995, 495-608. DOI: 10.1007/978-1-4615-2025-2

[4] Jiao H.W., Liu S.Y., Range division and compression algorithm for quadratically constrained sum of quadratic ratios, Comput. Appl. Math., 2017, 36(1), 225-247. DOI: 10.1007/s40314-015-0224-5

[5] Jiao H.W., Liu S.Y., Yin J., Zhao Y., Outcome space range reduction method for global optimization of sum of affine ratios problem, Open Math., 2016, 14, 736-746. DOI: 10.1515/math-2016-0058

[6] Rao M.R., Cluster analysis and mathematical programming, J. Am. Stat. Assoc., 1971, 66, 622-626. DOI: 10.1080/01621459.1971.10482319

[7] Falk J.E., Palocsay S.W., Optimizing the sum of linear fractional functions, Recent Advances in Global Optimization, Princeton University Press, Princeton, New Jersey, 1992. DOI: 10.1515/9781400862528.221

[8] Almogy Y., Levin O., Parametric analysis of a multi-stage stochastic shipping problem, Proc. of the Fifth IFORS Conf., 1964, 359-370.

[9] Konno H., Watanabe H., Bond portfolio optimization problems and their applications to index tracking, J. Oper. Res. Soc. Jpn., 1996, 39, 295-306. DOI: 10.15807/jorsj.39.295

[10] Majhi J., Janardan R., Smid M., Gupta P., On some geometric optimization problems in layered manufacturing, Comput. Geom., 1999, 12, 219-239. DOI: 10.1016/S0925-7721(99)00002-4

[11] Schwerdt J., Smid M., Janardan R., Johnson E., Majhi J., Protecting critical facets in layered manufacturing, Comput. Geom., 2000, 16, 187-210. DOI: 10.1016/S0925-7721(00)00008-0

[12] Schaible S., Shi J., Fractional programming: the sum-of-ratios case, Optim. Methods Softw., 2003, 18, 219-229. DOI: 10.1080/1055678031000105242

[13] Stancu-Minasian I.M., A sixth bibliography of fractional programming, Optimization, 2006, 55, 405-428. DOI: 10.1080/02331930600819613

[14] Jiao H.W., Liu S.Y., A practicable branch and bound algorithm for sum of linear ratios problem, Eur. J. Oper. Res., 2015, 243, 723-730. DOI: 10.1016/j.ejor.2015.01.039

[15] Karmarkar N., A new polynomial-time algorithm for linear programming, Combinatorica, 1984, 4, 373-395. DOI: 10.1007/BF02579150

[16] Cambini A., Martein L., Schaible S., On maximizing a sum of ratios, J. Inf. Optim. Sci., 1989, 10, 65-79. DOI: 10.1080/02522667.1989.10698952

[17] Konno H., Abe N., Minimization of the sum of three linear fractional functions, J. Global Optim., 1999, 15, 419-432. DOI: 10.1023/A:1008376731013

[18] Konno H., Yajima Y., Matsui T., Parametric simplex algorithm for solving a special class of nonconvex minimization problems, J. Global Optim., 1991, 1, 65-81. DOI: 10.1007/BF00120666

[19] Falk J.E., Palocsay S.W., Image space analysis of generalized fractional programs, J. Global Optim., 1994, 4, 63-88. DOI: 10.1007/BF01096535

[20] Pei Y.G., Zhu D.T., Global optimization method for maximizing the sum of difference of convex functions ratios over nonconvex region, J. Appl. Math. Comput., 2013, 41, 153-169. DOI: 10.1007/s12190-012-0602-8

[21] Shen P.P., Wang C.F., Global optimization for sum of linear ratios problem with coefficients, Appl. Math. Comput., 2006, 176, 219-229. DOI: 10.1016/j.amc.2005.09.047

[22] Wang Y.J., Shen P.P., Liang Z.A., A branch-and-bound algorithm to globally solve the sum of several linear ratios, Appl. Math. Comput., 2005, 168, 89-101. DOI: 10.1016/j.amc.2004.08.016

[23] Jiao H.W., Liu S.Y., An efficient algorithm for quadratic sum-of-ratios fractional programs problem, Numer. Funct. Anal. Optim., 2017, 38(11), 1426-1445. DOI: 10.1080/01630563.2017.1327869

[24] Phuong N.T.H., Tuy H., A unified monotonic approach to generalized linear fractional programming, J. Global Optim., 2003, 26, 229-259. DOI: 10.1023/A:1023274721632

[25] Lin M.H., Tsai J.F., Range reduction techniques for improving computational efficiency in global optimization of signomial geometric programming problems, Eur. J. Oper. Res., 2012, 216(1), 17-25. DOI: 10.1016/j.ejor.2011.06.046

[26] Dembo R.S., Avriel M., Optimal design of a membrane separation process using signomial programming, Math. Prog., 1978, 15(1), 12-25. DOI: 10.1007/BF01608996

Received: 2018-02-08
Accepted: 2018-04-04
Published Online: 2018-05-30

© 2018 Zhao and Zhao, published by De Gruyter

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
