European Journal of Operational Research 173 (2006) 351–369
www.elsevier.com/locate/ejor
Continuous Optimization
Fractional programming with convex quadratic
forms and functions
Harold P. Benson *
Department of Decision and Information Sciences, University of Florida, Gainesville, FL 32611, USA
Received 7 March 2004; accepted 7 February 2005
Available online 13 June 2005
Abstract
This article is concerned with two global optimization problems (P1) and (P2). Each of these problems is a fractional
programming problem involving the maximization of a ratio of a convex function to a convex function, where at least
one of the convex functions is a quadratic form. First, the article presents and validates a number of theoretical prop-
erties of these problems. Included among these properties is the result that, under a mild assumption, any globally opti-
mal solution for problem (P1) must belong to the boundary of its feasible region. Also among these properties is a result
that shows that problem (P2) can be reformulated as a convex maximization problem. Second, the article presents
for the first time an algorithm for globally solving problem (P2). The algorithm is a branch and bound algorithm in
which the main computational effort involves solving a sequence of convex programming problems. Convergence prop-
erties of the algorithm are presented, and computational issues that arise in implementing the algorithm are discussed.
Preliminary indications are that the algorithm can be expected to provide a practical approach for solving problem (P2),
provided that the number of variables is not too large.
© 2005 Elsevier B.V. All rights reserved.
Keywords: Global optimization; Fractional programming; Quadratic form; Branch and bound
1. Introduction
Consider the single-ratio fractional program
sup n(x)/d(x),
s.t. x ∈ X,                                                  (1)
* Tel.: +1 352 392 0134; fax: +1 352 392 5438.
E-mail address: harold.benson@cba.ufl.edu
0377-2217/$ - see front matter © 2005 Elsevier B.V. All rights reserved.
doi:10.1016/j.ejor.2005.02.069
where X is a nonempty, closed convex set in Rⁿ and n and d are real-valued functions defined on X such that d(x) > 0 for all x ∈ X. Systematic studies and applications of single-ratio fractional programs generally began to appear in the literature in the early 1960s. Since then, a rich body of work on the classification, theory, solution and applications of these problems has been published. For an overview of this work, see, for instance, the monographs by Craven [5], Martos [21] and Stancu-Minasian [27], the surveys contained in
instance, the monographs by Craven [5], Martos [21] and Stancu-Minasian [27], the surveys contained in
the articles by Frenk and Schaible [10,11] and Schaible [23,25] and references therein.
Suppose that, in addition to the assumptions given above for problem (1), n is concave on X, d is convex
on X and, if d is not affine, then n is nonnegative on X. Then problem (1) is called a concave fractional
program. A concave fractional program shares many important properties with general concave nonlinear
programs. In particular, any local maximum for a concave fractional program is a global maximum, and, if
n and d are differentiable on an open set containing X, then a solution of the Karush–Kuhn–Tucker opti-
mality conditions is a maximum [1,21]. In addition, the objective function of a concave fractional program
is semistrictly quasiconcave [1], and, by using an appropriate variable transformation, a concave fractional
program can be formulated as an equivalent concave program [10,26]. Because of these properties, there are
a number of approaches available for globally solving concave fractional programs. In fact, by transform-
ing a concave fractional program (1) into an equivalent concave program, a great number of the methods of
concave programming become available for solving problem (1).
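For the linear special case, the variable transformation mentioned above can be made explicit. The following sketch shows the classical Charnes–Cooper substitution; the data c, d, α, β, A, b are introduced here purely for illustration and do not come from the text:

```latex
% Linear fractional program:  max (c^T x + \alpha)/(d^T x + \beta)
% s.t. Ax \le b, with d^T x + \beta > 0 on the feasible set.
% Substituting y = tx and t = 1/(d^T x + \beta) yields the linear program
\max_{y,\,t}\; c^{\mathsf{T}} y + \alpha t
\quad\text{s.t.}\quad A y - b t \le 0, \qquad
d^{\mathsf{T}} y + \beta t = 1, \qquad t \ge 0,
% and an optimal (y^*, t^*) with t^* > 0 recovers x^* = y^*/t^*.
```

For a general concave fractional program the same substitution, applied with convex d, yields an equivalent concave program [10,26].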
When the additional assumptions for problem (1) given in the previous paragraph do not hold, problem
(1) is called a nonconcave fractional program. Nonconcave fractional programs arise in certain important
applications. Included among these applications, for instance, are certain portfolio selection problems
[19,22], stochastic decision making problems [27] and problems in economics [25]. In a nonconcave frac-
tional program, a local maximum need not be a global maximum, and in general, many locally optimal
solutions will exist that are not globally optimal solutions. Nonconcave fractional programs therefore fall
into the domain of global optimization [16,28].
Most of the theoretical and algorithmic work in fractional programming applies only to concave frac-
tional programs or to special cases of concave fractional programs. For this reason, Frenk and Schaible
[10] and Schaible [24] have encouraged further research on the solution of nonconcave fractional programs.
In this article, we are concerned with two nonconcave fractional programming problems. Each of these
fractional programs involves maximizing a ratio of a convex function to a convex function, where at least
one of the convex functions is a quadratic form. Let Q be an n × n positive semidefinite matrix of real numbers. Assume without loss of generality that Q is symmetric.
The first problem (P1) of concern is given by
max q1(x) := xᵀQx / xᵀPx,
s.t. x ∈ X,                                                  (P1)
where P is an n × n positive definite matrix of real numbers, and X is a nonempty, compact convex set in Rⁿ such that 0 ∉ X. We assume without loss of generality that P is symmetric.
The second problem (P2) of interest is the problem
max q2(x) := xᵀQx / g(x),
s.t. x ∈ X,                                                  (P2)
where X is a nonempty, compact convex set in Rⁿ, and g : Rⁿ → R is a convex function such that g(x) > 0 for all x ∈ X. Notice that problem (P1) is a special case of problem (P2).
Our study of problem (P1) is motivated by the work of Lo and MacKinlay [19] and of Gotoh and Konno
[13]. In [19], Lo and MacKinlay formulate a ‘‘maximally predictable portfolio’’ problem that has the form
of problem (P1) with X equal to a polyhedron. The goal of the maximally predictable portfolio problem is
to pick a portfolio of investments that maximizes the ratio of the variability in the portfolio return predicted
by a linear regression model to the variability of the portfolio return. Lo and MacKinlay argue that a port-
folio that maximizes this ratio gives important new insights into asset return findings that are based upon
less formal methods.
In [13], Gotoh and Konno propose an algorithm for globally solving problem (P1) under the additional
assumption that X is polyhedral. This algorithm implements the fractional programming approach of Dinkelbach [6] by using Ibaraki's binary search algorithm [17]. In general, the main computational effort required in the Gotoh–Konno algorithm arises from the need to solve a sequence of separable, nonconvex
optimization problems by a global optimization algorithm of Falk and Soland [9].
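Dinkelbach's parametric scheme itself is simple to state. The following is a minimal sketch, not the Gotoh–Konno implementation: the parametric subproblem must be solved globally by a user-supplied routine (`inner_max`, a name introduced here), which below is illustrated by brute force over a finite grid standing in for a small compact feasible set.

```python
import numpy as np

def dinkelbach(numer, denom, inner_max, v0=0.0, tol=1e-10, max_iter=100):
    """Dinkelbach's parametric scheme for max numer(x)/denom(x) over X.

    inner_max(v) must GLOBALLY solve  max_{x in X} numer(x) - v * denom(x);
    for a nonconcave ratio such as q1, this subproblem is itself a global
    (nonconvex) optimization problem, which is where the main effort in
    the Gotoh-Konno algorithm lies.
    """
    x = inner_max(v0)
    v = v0
    for _ in range(max_iter):
        if abs(numer(x) - v * denom(x)) < tol:   # F(v) = 0: v is the optimal ratio
            break
        v = numer(x) / denom(x)                  # move v up toward the optimal ratio
        x = inner_max(v)
    return x, v

# Illustrative two-variable instance (data chosen here for demonstration):
# maximize (x'Qx)/(x'Px) over a finite grid approximating a compact X.
Q = np.eye(2)
P = np.diag([1.0, 2.0])
grid = [np.array([a, b])
        for a in np.linspace(-2.0, 2.0, 41)
        for b in np.linspace(0.0, 2.0, 21)
        if -a + 4.0 * b >= 2.0]
numer = lambda x: float(x @ Q @ x)
denom = lambda x: float(x @ P @ x)
inner = lambda v: max(grid, key=lambda x: numer(x) - v * denom(x))

x_star, v_star = dinkelbach(numer, denom, inner)   # ratio 1 attained at (-2, 0)
```

The sequence of values v is nondecreasing and converges to the optimal ratio; each step, however, requires a global solution of the inner problem, which is exactly the difficulty the branch and bound algorithm of this article avoids.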
We study problem (P2) because it is a natural extension of the nonconcave fractional program (P1), and because of the need expressed by others for more research results for nonconcave fractional programs [10,24].
The purpose of this article is two-fold. First, the article presents and validates some theoretical properties
of problems (P1) and (P2). Included among these properties is the result that, under a mild assumption, any
globally optimal solution for problem (P1) must belong to the boundary of X. Also among these properties
is a result showing that problem (P2) can be reformulated as a maximization of a convex function over a
nonempty, compact convex set.
The second purpose of the article is to propose an algorithm for globally solving problem (P2). This
algorithm uses the branch and bound approach. In contrast to the Gotoh–Konno algorithm, the branch
and bound algorithm calls for solving a single, nonconvex program rather than a sequence of nonconvex
programs. Indications are that the branch and bound algorithm can be expected to provide a practical ap-
proach for globally solving problem (P2), at least for values of n that are not too large.
The article is organized as follows. Theoretical results for problems (P1) and (P2) are given in Section 2.
Section 3 presents and discusses the branch and bound algorithm for globally solving problem (P2). In Sec-
tion 4, the solution of a sample problem via the algorithm is described. Some concluding observations are
given in Section 5.
2. Theoretical properties
First, we concretely demonstrate that problems (P1) and (P2) are global optimization problems. To help
accomplish this, we need the following definitions and result.
Definition 1. Let f be a real-valued function defined on a convex set C, where C ⊆ Rⁿ. Then f is a quasiconcave (quasiconvex) function on C when
f[λx¹ + (1 − λ)x²] ≥ min[f(x¹), f(x²)]
(f[λx¹ + (1 − λ)x²] ≤ max[f(x¹), f(x²)])
for every x¹, x² ∈ C and λ such that 0 ≤ λ ≤ 1.
Quasiconcave functions belong to the set of generalized concave functions, i.e., they possess some, but not
all, of the properties of concave functions. In fact, quasiconcave functions constitute the weakest class of
generalized concave functions. Similar comments apply to quasiconvex functions [1].
The next result is well known (see, e.g., [20]).
Proposition 1. Let f be a real-valued function defined on the convex set C ⊆ Rⁿ. Then f is quasiconcave (quasiconvex) on C if and only if
{x ∈ C | f(x) ≥ α}
({x ∈ C | f(x) ≤ α})
is a convex set for each α ∈ R.
The following two examples demonstrate that the objective function of problem (P1) is not, in general,
quasiconcave over X, and it is not, in general, quasiconvex over X.
Example 1. Consider problem (P1) with
X = {(x₁, x₂) ∈ R² | −x₁ + 4x₂ ≥ 2, −2 ≤ x₁ ≤ 2, x₂ ≤ 2},
P = [1 0; 0 2],  Q = [1 0; 0 1].
Let
LU = {x ∈ X | q1(x) ≥ 5/6}.
Then, by Proposition 1, since (2, 1), (−2, 1) ∈ LU but (0, 1) ∉ LU, q1 is not quasiconcave on X.
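The membership claims in Example 1 are easy to verify numerically. The following quick sketch evaluates q1 at the three points used (the level value 5/6 is the one from the example):

```python
import numpy as np

P = np.array([[1.0, 0.0],
              [0.0, 2.0]])
Q = np.eye(2)

def q1(x):
    x = np.asarray(x, dtype=float)
    return (x @ Q @ x) / (x @ P @ x)

# (2,1) and (-2,1) lie in the upper level set LU at level 5/6 ...
assert abs(q1([2.0, 1.0]) - 5.0 / 6.0) < 1e-12
assert abs(q1([-2.0, 1.0]) - 5.0 / 6.0) < 1e-12
# ... but their midpoint (0,1) does not, so LU is not convex.
assert q1([0.0, 1.0]) < 5.0 / 6.0
```

Since the upper level set of a quasiconcave function is convex (Proposition 1), the failure of convexity at the midpoint is exactly what rules out quasiconcavity.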
Example 2. In problem (P1), let
X = {(x₁, x₂) ∈ R² | x₁ + x₂ ≥ 1, xⱼ ≤ 4, j = 1, 2},
P = [1 0; 0 6],  Q = [1 2; 2 6].
Consider
LL = {x ∈ X | q1(x) ≤ 1}.
Then (1, 0), (0, 1) ∈ LL, but (1/2, 1/2) ∉ LL. Therefore, by Proposition 1, q1(x) is not quasiconvex on X.
From Examples 1 and 2, we surmise that the nonconcave fractional programs (P1) and (P2) do not share
the basic properties of concave maximization or, even, of convex maximization problems. For instance, we
do not expect problems (P1) and (P2) to have the property that any locally maximal solution is globally
maximal. In fact, as we shall see, they do not have this property, and a stronger statement can be made.
To make this statement, we need the following definition.
Definition 2. Let f be a real-valued function defined on a set C ⊆ Rⁿ. A point x* ∈ C is called a strict locally maximal solution of f over C when there exists a δ > 0 such that for every x ≠ x* that belongs to the set N_δ(x*) ∩ C, where
N_δ(x*) = {x ∈ Rⁿ | ‖x − x*‖ < δ},
we have f(x) < f(x*).
From [1], if f is a quasiconcave function defined on a convex set C ⊆ Rⁿ, then any strict locally maximal solution of f over C is a (strict) globally maximal solution of f on C. However, for problems (P1) and (P2),
the corresponding result does not hold, even if X is polyhedral. For instance, in Example 1, it can be shown
that x̄ = (2, 1) is a strict locally maximal solution for q1(x) over X with q1(x̄) = 5/6. However, since q1(−2, 0) = 1 and (−2, 0) ∈ X, x̄ is not a globally maximal solution of q1 over X.
We have thus shown that in problems (P1) and (P2), locally maximal solutions, even in the strict sense,
need not be globally maximal solutions. From [1], we may view the reason for this to be the fact that q1(x)
and q2(x) need not be quasiconcave functions on X.
For certain classes of global optimization problems, locally or globally optimal solutions are guaranteed
to exist on the boundaries or on certain subsets of the boundaries of their feasible regions. For instance, if a
function g is continuous and quasiconvex on Rⁿ, then the global maximum of g over a nonempty, compact convex set Y in Rⁿ is attained at some extreme point of Y [15]. However, for problem (P2), there is no guarantee that the global maximum is attained at an extreme point of X, or even on the boundary of X. This is shown by the following example. For any set Y, let ∂Y and (int Y) denote the boundary and the interior of Y, respectively.
Example 3. In problem (P2), let
X = {(x₁, x₂) ∈ R² | 0 ≤ xⱼ ≤ 2, j = 1, 2},
Q = [1 0; 0 1],
and, for each (x₁, x₂) ∈ R², define g(x₁, x₂) by
g(x₁, x₂) = x₁⁴ + x₂⁴ + 1.
Then (1, 1) ∈ (int X) and q2(1, 1) = 2/3. For any a such that 0 ≤ a ≤ 2, it is not difficult to show that q2(a, 0), q2(0, a), q2(2, a) and q2(a, 2) are all less than 2/3. This implies that, in this problem, the global maximum of q2(x₁, x₂) over X is not attained on ∂X.
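The claim in Example 3 can be confirmed numerically. The following sketch checks the interior value 2/3 and samples the four edges of the box:

```python
import numpy as np

Q = np.eye(2)

def g(x):
    return x[0] ** 4 + x[1] ** 4 + 1.0

def q2(x):
    x = np.asarray(x, dtype=float)
    return (x @ Q @ x) / g(x)

# The interior point (1,1) attains 2/3 ...
assert abs(q2([1.0, 1.0]) - 2.0 / 3.0) < 1e-12
# ... while a fine sample of the four edges of X = [0,2]^2 stays below 2/3.
edge_vals = [q2(p)
             for a in np.linspace(0.0, 2.0, 401)
             for p in ([a, 0.0], [0.0, a], [2.0, a], [a, 2.0])]
assert max(edge_vals) < 2.0 / 3.0
```

A finite sample is of course not a proof, but here the edge functions are simple one-variable ratios (for example q2(a, 0) = a²/(a⁴ + 1) ≤ 1/2), so the bound can also be verified by elementary calculus.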
For problem (P1), certain guarantees exist concerning the locations for globally and locally optimal solu-
tions. To help establish the first of these guarantees, we need the following result. For a proof, see Prop-
osition 3.18 and its proof in [28].
Proposition 2. Let M be an n × n symmetric indefinite matrix of real numbers, and let Y ⊆ Rⁿ be a nonempty, compact set. Then any globally optimal solution to the problem
max yᵀMy,
s.t. y ∈ Y
must belong to ∂Y.
Theorem 1. Let v₁ denote the globally optimal value of problem (P1). Suppose that
v₁ < max q1(x),
s.t. x ∈ Rⁿ \ {0}.                                           (2)
Then any globally optimal solution for problem (P1) must belong to ∂X.
Proof. Let x* be a globally optimal solution for problem (P1). Then, from Dinkelbach [6], x* is a globally
optimal solution to the problem
max xᵀQx − v₁xᵀPx,
s.t. x ∈ X.
Notice that this problem may be written
max xᵀ(Q − v₁P)x,                                            (3)
s.t. x ∈ X.
From Gantmacher [12],
λ = max q1(x),
s.t. x ∈ Rⁿ \ {0},
where λ is the largest eigenvalue of the matrix P⁻¹Q. Thus, by assumption (2), v₁ < λ. From Gotoh and Konno [13], this implies that (Q − v₁P) is an indefinite matrix. Since x* is a globally optimal solution to the problem (3), by Proposition 2, this implies that x* ∈ ∂X. □
Remark 1. Assumption (2) in Theorem 1 requires that the condition x ∈ X in problem (P1) be essential, in the sense that if this condition were removed, then values of q1(x) larger than v₁ could be achieved. Thus, Theorem 1 states that in problem (P1), whenever the condition x ∈ X is essential, every globally optimal solution for problem (P1) must belong to ∂X.
Remark 2. Let T = P⁻¹Q. As explained in the proof of Theorem 1,
λ = max q1(x),
s.t. x ∈ Rⁿ \ {0},
where λ is the largest eigenvalue of the matrix T. Furthermore, from Gantmacher [12], the maximum λ is attained only by eigenvectors associated with the eigenvalue λ of T. To check whether or not assumption (2) holds in Theorem 1, we may therefore proceed by computing λ and then solving the convex programming problem
max x₁,
s.t. (T − λI)x = 0,
x ∈ X,
where I denotes the n × n identity matrix. This problem either has an optimal solution or it is infeasible. If the former holds, then assumption (2) fails, while if the latter holds, then assumption (2) holds.
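When X is polyhedral, the check just described is easy to script. The following sketch assumes SciPy's `linprog` and a polyhedral X given by inequalities and variable bounds; in floating point one would normally relax the equality (T − λI)x = 0 by a small tolerance, but the exact form suffices for the simple instance below (the data of Remark 3: P = I, Q = diag(1, 1/2), X = [1, 2] × [−1, 2]).

```python
import numpy as np
from scipy.optimize import linprog

def assumption2_fails(P, Q, A_ub=None, b_ub=None, bounds=None):
    """Check assumption (2) of Theorem 1 for X = {x : A_ub x <= b_ub} within `bounds`.

    Returns (lam, fails): lam is the largest eigenvalue of T = P^{-1} Q, and
    fails is True exactly when the LP  max x_1  s.t. (T - lam*I)x = 0, x in X
    has an optimal solution, i.e. assumption (2) does NOT hold.
    """
    T = np.linalg.solve(P, Q)
    lam = np.max(np.linalg.eigvals(T).real)   # eigenvalues of P^{-1}Q are real here
    n = P.shape[0]
    c = np.zeros(n)
    c[0] = -1.0                               # maximize x_1 <=> minimize -x_1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=T - lam * np.eye(n), b_eq=np.zeros(n),
                  bounds=bounds)
    return lam, res.status == 0               # status 0: optimal solution found

# Remark 3's data: the eigenspace of lam = 1 meets X, so assumption (2) fails.
lam, fails = assumption2_fails(np.eye(2), np.diag([1.0, 0.5]),
                               bounds=[(1.0, 2.0), (-1.0, 2.0)])
```

Because the objective is linear and the constraints are the linear eigenspace equations plus a convex X, the auxiliary problem is indeed a convex program, as Remark 2 states.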
Remark 3. Theorem 1 is not valid if assumption (2) is deleted. To see this, consider problem (P1) with
X = {(x₁, x₂) ∈ R² | 1 ≤ x₁ ≤ 2, −1 ≤ x₂ ≤ 2},
Q = [1 0; 0 1/2],  P = [1 0; 0 1].
In this case, λ = 1 is the largest eigenvalue of T, and the eigenvectors associated with λ are all points in R² of the form (x₁, 0), where x₁ ≠ 0. Therefore, in this case,
1 = max q1(x),
s.t. x ∈ R² \ {0},                                           (4)
and the maximum in (4) is attained by any point in R² of the form (x₁, 0), where x₁ ≠ 0. From this it is easy to see that the globally optimal solution set for the problem (P1) is given by
{(x₁, 0) | 1 ≤ x₁ ≤ 2}.
Notice that this set contains many points in (int X), so that in this problem, the conclusion of Theorem 1 does not hold. This is because in this problem, v₁ = 1, so that, by (4), assumption (2) does not hold.
Remark 4. The example given in Remark 3 also shows that problems (P1) and (P2) need not have globally
optimal solutions that are extreme points of X.
Even without assumption (2), there must exist a globally optimal solution for problem (P1) that belongs
to ∂X. In fact, we have the following stronger result.
Theorem 2. Let X′ be a nonempty compact set in Rⁿ such that 0 ∉ X′. Let Q and P be n × n symmetric positive semidefinite and positive definite matrices, respectively. Then any locally optimal value for the maximum of q1(x) over X′ is attained by at least one point that belongs to ∂X′.
Proof. Let v̄₁ denote a locally optimal value for the maximum of q1(x) over X′, and let x̄ ∈ X′ satisfy q1(x̄) = v̄₁. If x̄ ∈ ∂X′, then the desired result follows. Therefore, suppose that x̄ ∉ ∂X′. Then x̄ ∈ (int X′). Consider the set F given by
F = {β ∈ R | β ≥ 0, (1 + β)x̄ ∈ X′}.
Since x̄ ∈ (int X′), F ≠ ∅. Also, because X′ is compact, it is easy to show that F is compact. Therefore, we may choose an optimal solution β̄ > 0 to the problem
max β,
s.t. β ∈ F.                                                  (5)
Notice that (1 + β̄)x̄ ∈ ∂X′, because if this were not the case, then for some ε > 0, (1 + β̄ + ε)x̄ would belong to X′, which contradicts the optimality of β̄ in (5). Also, by the definition of q1(x), q1[(1 + β̄)x̄] = q1(x̄) = v̄₁. Thus, the point (1 + β̄)x̄ has the desired properties, and the proof is complete. □
It can be shown that for a concave fractional program (1) in which n(x) and d(x) are continuous func-
tions, the globally optimal solution set is a convex set. Since problems (P1) and (P2) are nonconcave frac-
tional programs, we may not expect this property to hold for these problems. Indeed, it does not, as can be
shown by examples. For brevity, however, we omit providing such an example.
Although the objective functions of problems (P1) and (P2) are generally neither quasiconcave nor quasiconvex over X, the next result shows that these problems can be reformulated as maximizations of quasiconvex functions over nonempty, compact convex sets. To show this, let m and M satisfy
m = min g(x), s.t. x ∈ X,
M ≥ max g(x), s.t. x ∈ X,
and consider the problem (P3) given by
max q3(x, t) := xᵀQx / t,
s.t. t − g(x) ≥ 0,
m ≤ t ≤ M,
x ∈ X,                                                       (P3)
where X, g(x) and Q are as given in problem (P2). Then problems (P2) and (P3) are equivalent in the sense
of the following result.
Proposition 3. If (x*, t*) ∈ Rⁿ⁺¹ is a globally optimal solution for problem (P3), then x* is a globally optimal solution for problem (P2). Conversely, if x* is a globally optimal solution for problem (P2), then (x*, t*) ∈ Rⁿ⁺¹ is a globally optimal solution for problem (P3), where t* = g(x*).
Proof. The proof of this result is straightforward and therefore is omitted. h
Since X is a nonempty, compact convex set in Rⁿ and g : Rⁿ → R is a convex function, it is easy to see that the feasible region Z of problem (P3) is a nonempty, compact convex set in Rⁿ⁺¹. Also, from Propositions 3.30 and 5.20 in [1], since Q is a symmetric, positive semidefinite matrix and g(x) is positive for all x ∈ X, q3(x, t) is a continuous, quasiconvex function on Z. From [16], this implies that the global maximum
of problem (P3) is attained at an extreme point of Z, and that various global optimization algorithms for
convex maximization problems, including, for example, outer approximation algorithms, can potentially be
effectively used to globally solve problem (P3). By Proposition 3, this provides a potential avenue for glob-
ally solving problems (P1) and (P2).
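The mechanism behind Proposition 3 is that any feasible (x, t) of (P3) has t ≥ g(x) > 0, so q3(x, t) ≤ q2(x) with equality exactly at t = g(x); hence maximizing q3 over Z and maximizing q2 over X give the same optimal value. A quick numerical sanity check of this (a sketch on the data of Example 3):

```python
import numpy as np

Q = np.eye(2)
g = lambda x: x[0] ** 4 + x[1] ** 4 + 1.0   # the g of Example 3
q2 = lambda x: (x @ Q @ x) / g(x)
q3 = lambda x, t: (x @ Q @ x) / t

# Sample feasible (x, t) pairs of (P3): x in X = [0,2]^2 and g(x) <= t <= M = 33.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.uniform(0.0, 2.0, size=2)
    t = rng.uniform(g(x), 33.0)
    assert q3(x, t) <= q2(x) + 1e-12         # enlarging t can only shrink the ratio
    assert abs(q3(x, g(x)) - q2(x)) < 1e-15  # equality at t = g(x)
```

This also shows why an optimal (x*, t*) of (P3) always satisfies t* = g(x*), which is the correspondence used in Proposition 3.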
To close this section, we will show that problem (P2), and therefore problem (P1), can be reformulated as
a maximization of a convex function over a nonempty, compact convex set. To yield a more concrete result,
we assume henceforth in problem (P2) that X is given by
X = {x ∈ Rⁿ | gᵢ(x) ≤ 0, i = 1, 2, . . . , q; L ≤ x ≤ U},    (6)
where L, U ∈ Rⁿ and, for each i = 1, 2, . . . , q, gᵢ(x) is a convex function on Rⁿ. The assumption that L ≤ x ≤ U for all x ∈ X can be made without loss of generality, since X is a compact set.
Since Q is an n × n symmetric positive semidefinite matrix of real numbers, it is well known that we may choose an orthonormal basis {w¹, w², . . . , wⁿ} of Rⁿ such that for each j ∈ {1, 2, . . . , n}, wʲ is an eigenvector of Q with eigenvalue aⱼ ≥ 0 [14]. Let W be the n × n matrix with columns w¹, w², . . . , wⁿ. The matrix W and the eigenvalues aⱼ, j = 1, 2, . . . , n, of Q can be found, for instance, by using the characteristic polynomial of Q to compute the eigenvalues a₁, a₂, . . . , aₙ. The corresponding orthonormal eigenvectors w¹, w², . . . , wⁿ can then be easily found.
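In practice one would not use the characteristic polynomial; a symmetric eigendecomposition routine computes W and the aⱼ directly. A minimal sketch, with an illustrative 2 × 2 matrix chosen here:

```python
import numpy as np

Q = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # symmetric PSD (eigenvalues 1 and 3)

a, W = np.linalg.eigh(Q)                   # a: eigenvalues ascending; W: orthonormal eigenvectors

assert np.allclose(W @ np.diag(a) @ W.T, Q)    # Q = W diag(a) W^T
assert np.allclose(W.T @ W, np.eye(2))         # W is orthogonal, so W^{-1} = W^T
assert np.all(a >= 0.0)                        # PSD implies a_j >= 0
```

Since W is orthogonal, W⁻¹ = Wᵀ, so passing between x and y = W⁻¹x in the reformulation below requires only a transpose.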
Let h(y) and hᵢ(y), i = 1, 2, . . . , q, be functions defined for each y ∈ Rⁿ by
h(y) = g(Wy)                                                 (7)
and
hᵢ(y) = gᵢ(Wy), i = 1, 2, . . . , q,                         (8)
respectively, and consider the problem (P4) given by
max q4(y, t) := (Σⱼ₌₁ⁿ aⱼyⱼ²) / t,
s.t. t − h(y) ≥ 0,
m ≤ t ≤ M,
hᵢ(y) ≤ 0, i = 1, 2, . . . , q,
L ≤ Wy ≤ U.                                                  (P4)
Theorem 3.
(a) If (y*, t*) ∈ Rⁿ⁺¹ is a globally optimal solution for problem (P4), then x* = Wy* is a globally optimal solution for problem (P2).
(b) Conversely, if x* is a globally optimal solution for problem (P2), then (y*, t*) ∈ Rⁿ⁺¹ is a globally optimal solution for problem (P4), where y* = W⁻¹x* and t* = g(x*).
Proof. (a) Let (y*, t*) ∈ Rⁿ⁺¹ be a globally optimal solution for problem (P4), and let x* = Wy*. Then
t* − g(x*) = t* − g(Wy*) = t* − h(y*) ≥ 0,                   (9)
where the second equation is from (7) and the inequality follows from the feasibility of (y*, t*) in problem (P4). Similarly, it follows from (8) and the feasibility of (y*, t*) in problem (P4) that
gᵢ(x*) ≤ 0, i = 1, 2, . . . , q.                             (10)