
International Journal of Bifurcation and Chaos, Vol. 15, No. 3 (2005) 827–839
© World Scientific Publishing Company

NEWTON FLOW AND INTERIOR POINT METHODS IN LINEAR PROGRAMMING

JEAN-PIERRE DEDIEU
MIP, Département de Mathématique, Université Paul Sabatier, 31062 Toulouse cedex 04, France

MIKE SHUB
Department of Mathematics, University of Toronto, 100 St. George Street, Toronto, Ontario M5S 3G3, Canada

Received January 12, 2004; Revised June 17, 2004

We study the geometry of the central paths of linear programming theory. These paths are the solution curves of the Newton vector field of the logarithmic barrier function. This vector field extends to the boundary of the polytope and we study the main properties of this extension: continuity, analyticity, singularities.

Keywords: Linear programming; interior point method; central path; Newton vector field; extension.

1. Introduction

In this paper we take up once again the subject of the geometry of the central paths of linear programming theory. We study the boundary behavior of these paths as in [Megiddo & Shub, 1989], but from a different perspective and with a different emphasis. Our main goal is to give a global picture of the central paths, even for degenerate problems, as solution curves of the Newton vector field $N(x)$ of the logarithmic barrier function, which we describe below. See also [Bayer & Lagarias, 1989a, 1989b, 1991]. The Newton vector field extends to the boundary of the polytope. It is tangent to the boundary and, restricted to any face of dimension $i$, it has a unique source whose unstable manifold has dimension $i$, the rest of the orbits tending to the boundary of the face. Every orbit tends either to a vertex or to one of these sources in a face. See Corollary 4.1.

This highly cellular structure of the flow lends itself to the conjecture that the total curvature of these central paths may be linearly bounded by the dimension $n$ of the polytope. The orbits may be relatively straight, except for orbits which come close to an orbit in a face of dimension $i$ which itself comes close to a singularity in a boundary face of dimension less than $i$. Such an orbit is then forced to turn almost parallel to the lower dimensional face, so its tangent vector may be forced to turn as well. See the two figures at the end of this paper. As this process involves a reduction of the dimension of the face, it can happen at most the dimension of the polytope many times. So our optimistic conjecture is that the total curvature of a central path is $O(n)$. We have verified the conjecture in an average sense in [Dedieu et al.]. It is not difficult to give an example showing that $O(n)$ is the best possible bound for the worst case; such an example is worked out in [Megiddo & Shub, 1989]. The average behavior may, however, be much better. Ultimately we hope that an understanding of the curvature of the central paths may contribute to the analysis of algorithms which use them. In [Vavasis & Ye, 1996] the authors explore similar structure to give an algorithm whose running time depends only on the polytope.

We prove in Corollary 4.1 that the extended vector field is Lipschitz on the closed polytope.


Under a genericity hypothesis we prove in Theorem 5.1 that it extends to a real analytic vector field on a neighborhood of the polytope. Under the same genericity hypothesis we prove in Theorem 5.2 that the singularities are all hyperbolic. The eigenvalues of $-N(x)$ at the singularities are all $+1$ tangent to the face and $-1$ transversal to the face. In dynamical systems terminology the vector field is Morse–Smale. The vertices are the sinks. Finally, we mention that in order to prove that $N(x)$ always extends continuously to the boundary of the polytope we prove Lemma 4.2, which may be of independent interest, about the continuity of the Moore–Penrose inverse of a family of linear maps of variable rank.

2. The Central Path is a Trajectory of the Newton Vector Field

Linear programming problems are frequently presented in different formats. We will work with one of them here which we find convenient. The polytopes defined in one format are usually affinely equivalent to the polytopes defined in another. So we begin with a discussion of Newton vector fields and how they transform under affine equivalence. This material is quite standard. An excellent source for this fact, and for linear programming in general, is [Renegar, 2001].

Let $Q$ be an affine subspace of $\mathbb{R}^n$ (or a Hilbert space if you prefer, in which case assume $Q$ is closed). Denote the tangent space of $Q$ by $V$. Suppose that $U$ is an open subset of $Q$ and let $f : U \to \mathbb{R}$ be twice continuously differentiable. The derivative $Df(x)$ belongs to $L(V, \mathbb{R})$, the linear maps from $V$ to $\mathbb{R}$, so $Df$ defines a map from $U$ to $L(V, \mathbb{R})$. The second derivative $D^2 f(x)$ is an element of $L(V, L(V, \mathbb{R}))$. Thus $D^2 f(x)$ is a linear map from a vector space to another isomorphic space, and $D^2 f(x)$ may be invertible. If $f$ is as above and $D^2 f(x)$ is invertible, we define the Newton vector field $N_f(x)$ by:

Definition 2.1.
$$N_f(x) = -(D^2 f(x))^{-1} Df(x).$$

Note that if $V$ has a nondegenerate inner product $\langle\, ,\rangle$ then the gradient of $f$, $\operatorname{grad} f(x) \in V$, and the Hessian, $\operatorname{hess} f(x) \in L(V, V)$, are defined by
$$Df(x)u = \langle u, \operatorname{grad} f(x)\rangle \quad\text{and}\quad D^2 f(x)(u, v) = \langle u, (\operatorname{hess} f(x))v\rangle.$$
It follows then that
$$N_f(x) = -(\operatorname{hess} f(x))^{-1} \operatorname{grad} f(x).$$

Now let $A$ be an affine map from $P$ to $Q$ whose linear part $L$ is an isomorphism. Suppose $U_1$ is open in $P$ and $A(U_1) \subseteq U$. Let $g = f \circ A$.

Proposition 2.1. $A$ maps the solution curves of $N_g$ to the solution curves of $N_f$.

Proof. By the chain rule, $Dg(y) = Df(A(y))L$ and $D^2 g(y)(u, v) = D^2 f(A(y))(Lu, Lv)$. So $u = N_g(y)$ if and only if $D^2 g(y)(u, v) = -Dg(y)(v)$ for all $v$, if and only if $D^2 f(A(y))(Lu, Lv) = -Df(A(y))Lv$ for all $v$, i.e. $N_f(A(y)) = L(u)$, or $LN_g(y) = N_f(A(y))$. This last is the equation expressing that the vector field $N_f$ is the push forward by the map $A$ of the vector field $N_g$, and hence the solution curves of the $N_g$ field are mapped by $A$ to the solution curves of $N_f$. □
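As an aside, Definition 2.1 and Proposition 2.1 are easy to check numerically. The following sketch is ours, not from the paper: the quadratic test function, the affine map, and all names are our choices. It evaluates $N_f(x) = -(\operatorname{hess} f(x))^{-1}\operatorname{grad} f(x)$ and verifies the push-forward identity $LN_g(y) = N_f(A(y))$ at a random point.

```python
import numpy as np

# Hypothetical illustration (ours) of Definition 2.1 and Proposition 2.1,
# using a strictly convex quadratic test function.

def newton_field(grad, hess, x):
    """N_f(x) = -(hess f(x))^{-1} grad f(x)."""
    return -np.linalg.solve(hess(x), grad(x))

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
Q = M @ M.T + 3.0 * np.eye(3)        # positive definite Hessian
q = rng.standard_normal(3)
grad_f = lambda x: Q @ x - q         # f(x) = x^T Q x / 2 - <q, x>
hess_f = lambda x: Q

# Affine map A(y) = L y + t with invertible linear part L, and g = f o A.
L = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)
t = rng.standard_normal(3)
grad_g = lambda y: L.T @ grad_f(L @ y + t)   # Dg(y) = Df(A(y)) L
hess_g = lambda y: L.T @ Q @ L               # D^2 g(y) = L^T Q L

y = rng.standard_normal(3)
lhs = L @ newton_field(grad_g, hess_g, y)        # L N_g(y)
rhs = newton_field(grad_f, hess_f, L @ y + t)    # N_f(A(y))
print(np.allclose(lhs, rhs))  # True: A maps N_g-curves onto N_f-curves
```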

Now we make explicit the linear programming format used in this paper, define the central paths, and relate them to the Newton vector field of the logarithmic barrier function. Let $P$ be a compact polytope in $\mathbb{R}^n$ defined by $m$ affine inequalities
$$A_i x \ge b_i, \quad 1 \le i \le m.$$
Here $A_i x$ denotes the matrix product of the row vector $A_i = (a_{i1}, \dots, a_{in})$ by the column vector $x = (x_1, \dots, x_n)^T$, $A$ is the $m \times n$ matrix with rows $A_i$, and we assume $\operatorname{rank} A = n$. Given $c \in \mathbb{R}^n$, we consider the linear programming problem
$$(LP) \qquad \min_{\substack{A_i x \ge b_i \\ 1 \le i \le m}} \langle c, x \rangle.$$
Let us denote by
$$f(x) = \sum_{i=1}^m \ln(A_i x - b_i)$$
(with $\ln(s) = -\infty$ when $s \le 0$) the logarithmic barrier function associated with the description $Ax \ge b$ of $P$. The barrier technique considers the family of nonlinear convex optimization problems
$$(LP(t)) \qquad \min_{x \in \mathbb{R}^n} t\langle c, x \rangle - f(x)$$
with $t > 0$. The objective function $f_t(x) = t\langle c, x \rangle - f(x)$ is strictly convex, smooth, and satisfies
$$\lim_{\substack{x \to \partial P \\ x \in \operatorname{Int} P}} f_t(x) = \infty.$$
Thus there exists a unique optimal solution $\gamma(t)$ to $(LP(t))$ for any $t > 0$. This curve is called the central path of our problem.

Let us denote by $D_x$ the $m \times m$ diagonal matrix $D_x = \operatorname{Diag}(A_i x - b_i)$. This matrix is nonsingular for any $x \in \operatorname{Int} P$. We also let $e = (1, \dots, 1)^T \in \mathbb{R}^m$,
$$g(x) = \operatorname{grad} f(x) = \sum_{i=1}^m \frac{A_i^T}{A_i x - b_i} = A^T D_x^{-1} e$$
and
$$h(x) = \operatorname{hess} f(x) = -A^T D_x^{-2} A.$$
Since $f_t$ is smooth and strictly convex, the central path is given by the equation $\operatorname{grad} f_t(\gamma(t)) = 0$, i.e.
$$g(\gamma(t)) = tc, \quad t > 0.$$
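The equation $g(\gamma(t)) = tc$ suggests a direct way to compute points of the central path numerically. The following sketch is ours, not the paper's algorithm: the polytope is the triangle of Sec. 6, and the cost vector, the $t$-grid, and the damping rule are all our choices. It follows $\gamma(t)$ by Newton's method applied to $f_t$.

```python
import numpy as np

# Hypothetical path-following sketch (ours): compute gamma(t) from
# g(gamma(t)) = t c by Newton's method on f_t(x) = t<c,x> - f(x).
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, -1.0])
c = np.array([1.0, 2.0])

def g(x):                       # g(x) = grad f(x) = A^T D_x^{-1} e
    return A.T @ (1.0 / (A @ x - b))

def hess_ft(x):                 # hess f_t(x) = A^T D_x^{-2} A for every t
    s = A @ x - b
    return (A / s[:, None] ** 2).T @ A

x = np.array([0.3, 0.3])        # strictly interior starting point
for t in np.geomspace(1e-2, 1e4, 60):
    for _ in range(50):         # Newton iterations for grad f_t = tc - g(x) = 0
        dx = np.linalg.solve(hess_ft(x), g(x) - t * c)
        s, ds = A @ x - b, A @ dx
        neg = ds < 0            # damp so that A(x + a dx) - b stays > 0
        a = 1.0 if not neg.any() else min(1.0, 0.99 * np.min(s[neg] / -ds[neg]))
        x = x + a * dx
        if np.linalg.norm(dx) < 1e-12:
            break
print(x)   # gamma(t) for large t: close to the optimal vertex (0, 0)
```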

When $t \to 0$, the limit of $\gamma(t)$ is given by
$$-f(\gamma(0)) = \min_{x \in \mathbb{R}^n} -f(x).$$
It is called the analytic center of $P$ and denoted by $c_P$.

Lemma 2.1. $g : \operatorname{Int} P \to \mathbb{R}^n$ is real analytic and invertible. Its inverse is also real analytic.

Proof. For any $c \in \mathbb{R}^n$ the optimization problem
$$\min_{x \in \mathbb{R}^n} \langle c, x \rangle - f(x)$$
has a unique solution in $\operatorname{Int} P$ because the objective function is smooth, strictly convex, and $P$ is compact. Thus $g(x) = c$ has a unique solution, that is, $g$ is bijective. We also notice that, for any $x$, $Dg(x)$ is nonsingular. Thus $g^{-1}$ is real analytic by the inverse function theorem. □

According to this lemma, the central path is the inverse image by $g$ of the ray $c\mathbb{R}_+$. When $c$ varies in $\mathbb{R}^n$ we obtain a family of curves. Our aim in this paper is to investigate the structure of this family.

For a subspace $B \subset \mathbb{R}^m$ we denote by $\Pi_B$ the orthogonal projection $\mathbb{R}^m \to B$. Let $b_1, \dots, b_r$ be a basis of $B$ and let us denote by $B$ the $m \times r$ matrix whose columns are the vectors $b_i$. Then $\Pi_B$ is given by $\Pi_B = B(B^T B)^{-1} B^T = BB^\dagger$ ($B^\dagger$ is the generalized inverse of $B$, equal to $(B^T B)^{-1} B^T$ because $B$ is injective).

Definition 2.2. The Newton vector field associated with $g$ is
$$N(x) = -Dg(x)^{-1} g(x) = (A^T D_x^{-2} A)^{-1} A^T D_x^{-1} e = A^\dagger D_x \Pi_{D_x^{-1} A}\, e.$$

It is defined and analytic on $\operatorname{Int} P$. Note that the expression $A^\dagger D_x \Pi_{D_x^{-1} A}\, e$ is defined for all $x \in \mathbb{R}^n$ for which $A_i x - b_i \neq 0$ for all $i$. Thus $N(x)$ is defined by the rational expression in Definition 2.2 for almost all $x \in \mathbb{R}^n$. Later we will prove that this rational expression has a continuous extension to all of $\mathbb{R}^n$.

Lemma 2.2. The central paths $\gamma(t)$, $c \in \mathbb{R}^n$, are the trajectories of the vector field $-N(x)$.

Proof. A central path is given by
$$g(\gamma(t)) = tc, \quad t > 0,$$
for a given $c \in \mathbb{R}^n$. Let us change variables: $t = \exp(s)$ and $d(s) = \gamma(t)$ with $s \in \mathbb{R}$. Then
$$g(d(s)) = \exp(s)c, \quad s \in \mathbb{R},$$
so that
$$\frac{d}{ds}\, g(d(s)) = \exp(s)c = g(d(s)).$$
Let us denote $\dot d(s) = (d/ds)\, d(s)$. We have
$$\frac{d}{ds}\, g(d(s)) = Dg(d(s))\, \dot d(s),$$
thus
$$\dot d(s) = Dg(d(s))^{-1} g(d(s)) = -N(d(s))$$
and $d(s)$ is a trajectory of the vector field $-N(x)$. Conversely, if $\dot d(s) = -N(d(s)) = Dg(d(s))^{-1} g(d(s))$, $s \in \mathbb{R}$, then
$$\frac{d}{ds}\, g(d(s)) = Dg(d(s))\, \dot d(s) = g(d(s)),$$
so that $g(d(s)) = \exp(s)\, g(d(0))$, which is the central path associated with $c = g(d(0))$. □

Remark 2.1. The trajectories of $N(x)$ and $-N(x)$ are the same with time reversed. As $t \to \infty$, $\gamma(t)$ tends to the optimal points of the linear programming problem, so we are interested in the positive time trajectories of $-N(x)$.

Lemma 2.3. The analytic center $c_P$ is the unique singular point of the Newton vector field $N(x)$, $x \in \operatorname{Int} P$.

Proof. $N(x) = 0$ if and only if $g(x) = 0$, that is, $x = c_P$. □
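Before turning to an explicit expression for $N(x)$, here is a small numeric sanity check, ours and not from the paper, that the three formulas for $N(x)$ in Definition 2.2 coincide; the triangle data are those of Sec. 6 and the test point is arbitrary.

```python
import numpy as np

# Numeric check (ours) that the three expressions for N(x) in
# Definition 2.2 agree at an interior point of the triangle of Sec. 6.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, -1.0])
x = np.array([0.2, 0.5])                    # interior point: A x - b > 0

s = A @ x - b
Dx = np.diag(s)
e = np.ones(3)

g = A.T @ (e / s)                           # g(x)  = A^T D_x^{-1} e
Dg = -A.T @ np.diag(1.0 / s**2) @ A         # Dg(x) = -A^T D_x^{-2} A
N1 = -np.linalg.solve(Dg, g)                # -Dg(x)^{-1} g(x)

H = A.T @ np.diag(1.0 / s**2) @ A
N2 = np.linalg.solve(H, A.T @ (e / s))      # (A^T D_x^{-2} A)^{-1} A^T D_x^{-1} e

B = np.diag(1.0 / s) @ A                    # basis of D_x^{-1} A
P = B @ np.linalg.solve(B.T @ B, B.T)       # Pi = B (B^T B)^{-1} B^T
N3 = np.linalg.pinv(A) @ Dx @ P @ e         # A^dagger D_x Pi_{D_x^{-1} A} e

print(np.allclose(N1, N2), np.allclose(N2, N3))   # True True
```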

3. An Analytic Expression for the Newton Vector Field

In this section we compute an analytic expression for $N(x)$ which will be useful later. For any subset $K_n \subset \{1, \dots, m\}$, $K_n = \{k_1 < \dots < k_n\}$, we denote by $A_{K_n}$ the $n \times n$ submatrix of $A$ with rows $A_{k_1}, \dots, A_{k_n}$, by $b_{K_n}$ the vector in $\mathbb{R}^n$ with coordinates $b_{k_1}, \dots, b_{k_n}$, and by $u_{K_n}$ the unique solution of the system $A_{K_n} u_{K_n} = b_{K_n}$ when the matrix $A_{K_n}$ is nonsingular. With these notations we have:

Proposition 3.1. For any $x \in \operatorname{Int} P$,
$$N(x) = \frac{\displaystyle\sum_{\substack{K_n \subset \{1,\dots,m\} \\ \det A_{K_n} \neq 0}} (x - u_{K_n})(\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2}{\displaystyle\sum_{\substack{K_n \subset \{1,\dots,m\} \\ \det A_{K_n} \neq 0}} (\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2}.$$

Proof. Let us denote $\Pi = \prod_{l=1}^m (A_l x - b_l)$ and $\Pi_k = \prod_{l \neq k} (A_l x - b_l)$. We already know (Definition 2.2) that $N(x) = (A^T D_x^{-2} A)^{-1} A^T D_x^{-1} e$ with
$$(A^T D_x^{-2} A)_{ij} = \sum_{k=1}^m \frac{a_{ki} a_{kj}}{(A_k x - b_k)^2} = \frac{1}{\Pi^2} \sum_{k=1}^m a_{ki} a_{kj} \Pi_k^2 = \frac{1}{\Pi^2} X_{ij},$$
where $X$ is the $n \times n$ matrix given by $X_{ij} = \sum_{k=1}^m a_{ki} a_{kj} \Pi_k^2$. Moreover,
$$(A^T D_x^{-1} e)_i = \sum_{k=1}^m \frac{a_{ki}}{A_k x - b_k} = \frac{1}{\Pi} \sum_{k=1}^m a_{ki} \Pi_k = \frac{1}{\Pi} V_i,$$
where $V$ is the $n$-vector given by $V_i = \sum_{k=1}^m a_{ki} \Pi_k$. This gives
$$N(x) = \Pi X^{-1} V.$$
To compute $X^{-1}$ we use Cramer's formula: $X^{-1} = \operatorname{cof}(X)^T / \det(X)$, where $\operatorname{cof}(X)$ denotes the matrix of cofactors: $\operatorname{cof}(X)_{ij} = (-1)^{i+j} \det(X^{ij})$, with $X^{ij}$ the $(n-1) \times (n-1)$ matrix obtained by deleting in $X$ the $i$th row and $j$th column.

We first compute $\det X$. We have
$$\det X = \sum_{\sigma \in S_n} \epsilon(\sigma)\, X_{1\sigma(1)} \cdots X_{n\sigma(n)},$$
where $S_n$ is the group of permutations of $\{1, \dots, n\}$ and $\epsilon(\sigma)$ the signature of $\sigma$. Thus
$$\det X = \sum_{\sigma \in S_n} \epsilon(\sigma) \prod_{j=1}^n \sum_{k_j=1}^m a_{k_j j}\, a_{k_j \sigma(j)}\, \Pi_{k_j}^2 = \sum_{\substack{1 \le k_j \le m \\ 1 \le j \le n}} \Pi_{k_1}^2 \cdots \Pi_{k_n}^2\, a_{k_1 1} \cdots a_{k_n n} \det A_{k_1 \cdots k_n},$$
where $A_{k_1 \cdots k_n}$ is the matrix with rows $A_{k_1}, \dots, A_{k_n}$. When two or more indices $k_j$ are equal, the corresponding coefficient $\det A_{k_1 \cdots k_n}$ is zero. For this reason, instead of this sum taken over $n$ independent indices $k_j$, we consider a set $K_n \subset \{1, \dots, m\}$, $K_n = \{k_1 < \dots < k_n\}$, and all the possible permutations $\sigma \in S(K_n)$. We obtain
$$\det X = \sum_{K_n \subset \{1,\dots,m\}} \sum_{\sigma \in S(K_n)} \Pi_{\sigma(k_1)}^2 \cdots \Pi_{\sigma(k_n)}^2\, a_{\sigma(k_1) 1} \cdots a_{\sigma(k_n) n} \det A_{\sigma(k_1) \cdots \sigma(k_n)} = \sum_{K_n \subset \{1,\dots,m\}} \Pi_{k_1}^2 \cdots \Pi_{k_n}^2\, (\det A_{K_n})^2,$$
since $\Pi_{\sigma(k_1)}^2 \cdots \Pi_{\sigma(k_n)}^2 = \Pi_{k_1}^2 \cdots \Pi_{k_n}^2$, $\det A_{\sigma(k_1) \cdots \sigma(k_n)} = \epsilon(\sigma) \det A_{K_n}$, and $\sum_{\sigma} \epsilon(\sigma)\, a_{\sigma(k_1) 1} \cdots a_{\sigma(k_n) n} = \det A_{K_n}$. Note that, for any $l = 1, \dots, m$, the product $\Pi_{k_1}^2 \cdots \Pi_{k_n}^2$ contains $(A_l x - b_l)^{2n}$ if $l \notin K_n$ and $(A_l x - b_l)^{2n-2}$ otherwise. For this reason
$$\det X = \Pi^{2n-2} \sum_{K_n \subset \{1,\dots,m\}} (\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2.$$

Let us now compute $Y = \operatorname{cof}(X)^T V$. We have
$$Y_i = \sum_{j=1}^n (-1)^{i+j} \det(X^{ji})\, V_j = \sum_{k=1}^m \Pi_k \sum_{j=1}^n (-1)^{i+j} \det(X^{ij})\, a_{kj},$$
because $X$ is symmetric. This last sum is the determinant of the matrix with rows $X_1, \dots, X_{i-1}, A_k, X_{i+1}, \dots, X_n$ (where $X_j$ denotes the $j$th row of $X$), so that
$$Y_i = \sum_{k=1}^m \Pi_k \sum_{\sigma \in S_n} \epsilon(\sigma)\, X_{1\sigma(1)} \cdots X_{i-1\,\sigma(i-1)}\, a_{k\sigma(i)}\, X_{i+1\,\sigma(i+1)} \cdots X_{n\sigma(n)}$$
$$= \sum_{k=1}^m \Pi_k \sum_{\substack{1 \le k_j \le m \\ j \neq i}} \left(\prod_{j \neq i} a_{k_j j}\, \Pi_{k_j}^2\right) \sum_{\sigma \in S_n} \epsilon(\sigma)\, a_{k\sigma(i)} \prod_{j \neq i} a_{k_j \sigma(j)},$$
and the inner sum over $\sigma$ equals $\det A_{k_1 \cdots k_{i-1} k k_{i+1} \cdots k_n}$. By a similar argument as before, we sum over any set with $n - 1$ elements, $K_{n-1} \subset \{1, \dots, m\}$, $K_{n-1} = \{k_1 < \dots < k_{i-1} < k_{i+1} < \dots < k_n\}$, and any permutation $\sigma \in S(K_{n-1})$. We obtain, as previously,
$$Y_i = \sum_{k=1}^m \Pi_k \sum_{K_{n-1} \subset \{1,\dots,m\}} \Pi_{k_1}^2 \cdots \Pi_{k_{i-1}}^2 \Pi_{k_{i+1}}^2 \cdots \Pi_{k_n}^2 \det A^i_{k_1 \cdots k_{i-1} k_{i+1} \cdots k_n} \det A_{k_1 \cdots k_{i-1} k k_{i+1} \cdots k_n},$$
with $A^i_{k_1 \cdots k_{i-1} k_{i+1} \cdots k_n}$ the matrix with rows $A_{k_j}$, $j \in K_{n-1}$, and the $i$th column removed.

The quantity $A_l x - b_l$ appears in the product $\Pi_k \Pi_{k_1}^2 \cdots \Pi_{k_{i-1}}^2 \Pi_{k_{i+1}}^2 \cdots \Pi_{k_n}^2$ with an exponent equal to
• $2n - 1$ when $l \neq k$ and $l \notin K_{n-1}$,
• $2n - 2$ when $l = k$ and $l \notin K_{n-1}$,
• $2n - 3$ when $l \neq k$ and $l \in K_{n-1}$,
• $2n - 4$ when $l = k$ and $l \in K_{n-1}$.
In this latter case, two rows of the matrix $A_{k_1 \cdots k_{i-1} k k_{i+1} \cdots k_n}$ are equal and its determinant is zero. Thus each term $A_l x - b_l$ appears at least $2n - 3$ times, so that
$$Y_i = \Pi^{2n-3} \sum_{k=1}^m \sum_{K_{n-1}} (A_k x - b_k) \prod_{\substack{l \neq k \\ l \notin K_{n-1}}} (A_l x - b_l)^2 \det A^i_{k_1 \cdots k_{i-1} k_{i+1} \cdots k_n} \det A_{k_1 \cdots k_{i-1} k k_{i+1} \cdots k_n}.$$

The $i$th component of the Newton vector field is equal to $N(x)_i = \Pi Y_i / \det X$, so that
$$N(x)_i = \frac{\displaystyle\sum_{k=1}^m \sum_{K_{n-1}} (A_k x - b_k) \prod_{\substack{l \neq k \\ l \notin K_{n-1}}} (A_l x - b_l)^2 \det A^i_{k_1 \cdots k_{i-1} k_{i+1} \cdots k_n} \det A_{k_1 \cdots k_{i-1} k k_{i+1} \cdots k_n}}{\displaystyle\sum_{K_n} (\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2}.$$

Instead of a sum taken over $k$ and $K_{n-1}$ in the numerator, we use a subset $K_n \subset \{1, \dots, m\}$ equal to the union of $\{k\}$ and $K_{n-1}$. Notice that $\det A_{k_1 \cdots k_{i-1} k k_{i+1} \cdots k_n} = 0$ when $k \in K_{n-1}$, so that this case is not considered. Conversely, for a given $K_n = \{k_1 < \dots < k_n\}$, we can write it in $n$ different ways as a union of $k = k_j$ and $K_{n-1} = K_n \setminus \{k_j\}$. For these reasons we get
$$N(x)_i = \frac{\displaystyle\sum_{K_n} \left(\sum_{j=1}^n (A_{k_j} x - b_{k_j}) \det A^{ji}_{K_n} \det A_{K_n,i,j}\right) \prod_{l \notin K_n} (A_l x - b_l)^2}{\displaystyle\sum_{K_n} (\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2},$$
with $A^{ji}_{K_n}$ the matrix obtained from $A_{K_n}$ by deleting the $j$th row and $i$th column, and $A_{K_n,i,j}$ obtained from $A_{K_n}$ by removing the row $A_{k_j}$ and reinserting it as the $i$th row, the other rows remaining in the same order. Note that $\det A_{K_n,i,j} = (-1)^{i+j} \det A_{K_n}$, thus
$$N(x)_i = \frac{\displaystyle\sum_{K_n} \left(\sum_{j=1}^n (A_{k_j} x - b_{k_j})(-1)^{i+j} \det A^{ji}_{K_n}\right) \det A_{K_n} \prod_{l \notin K_n} (A_l x - b_l)^2}{\displaystyle\sum_{K_n} (\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2}.$$
In fact this sum is taken over the sets $K_n$ such that $A_{K_n}$ is nonsingular; otherwise, the coefficient $\det A_{K_n}$ vanishes and the corresponding term is zero. According to Cramer's formulas, the expression $(-1)^{i+j} \det A^{ji}_{K_n} / \det A_{K_n}$ is equal to $(A_{K_n}^{-1})_{ij}$. Thus
$$\sum_{j=1}^n (A_{k_j} x - b_{k_j})(-1)^{i+j} \det A^{ji}_{K_n} = (\det A_{K_n}) \left(A_{K_n}^{-1}(A_{K_n} x - b_{K_n})\right)_i = (\det A_{K_n}) \left(x_i - \left(A_{K_n}^{-1} b_{K_n}\right)_i\right) = (\det A_{K_n})(x_i - u_{K_n,i}).$$
We get
$$N(x)_i = \frac{\displaystyle\sum_{K_n} (x_i - u_{K_n,i})(\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2}{\displaystyle\sum_{K_n} (\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2}$$
and we are done. □
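Proposition 3.1 lends itself to a brute-force numerical check: sum the rational expression over all $n$-element subsets $K_n$ and compare with the direct formula of Definition 2.2. The sketch below is ours; the quadrilateral data are an arbitrary choice, not from the paper.

```python
import numpy as np
from itertools import combinations

# Brute-force check (ours) of Proposition 3.1 against the direct formula
# N(x) = (A^T D_x^{-2} A)^{-1} A^T D_x^{-1} e.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0], [3.0, -1.0]])
b = np.array([0.0, 0.0, -1.0, -2.0])
m, n = A.shape                            # rank A = n = 2
x = np.array([0.2, 0.4])                  # interior point: A x - b > 0

num, den = np.zeros(n), 0.0
for K in combinations(range(m), n):
    AK = A[list(K)]
    d = np.linalg.det(AK)
    if abs(d) < 1e-12:                    # only nonsingular submatrices count
        continue
    uK = np.linalg.solve(AK, b[list(K)])  # A_{K_n} u_{K_n} = b_{K_n}
    rest = [l for l in range(m) if l not in K]
    w = d**2 * np.prod((A[rest] @ x - b[rest]) ** 2)
    num += (x - uK) * w
    den += w

s = A @ x - b
direct = np.linalg.solve(A.T @ np.diag(1.0 / s**2) @ A, A.T @ (1.0 / s))
print(np.allclose(num / den, direct))     # True
```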

4. Extension to the Faces of P

Our aim is to extend the Newton vector field, defined in the interior of $P$, to its different faces. Let $P_J$ be the face of $P$ defined by
$$P_J = \{x \in \mathbb{R}^n : A_i x = b_i \text{ for any } i \in I \text{ and } A_i x \ge b_i \text{ for any } i \in J\}.$$
Here $I$ is a subset of $\{1, 2, \dots, m\}$ containing $m_I$ integers, $J = \{1, 2, \dots, m\} \setminus I$, and $m_J = m - m_I$.

Definition 4.1. The face $P_J$ is regularly described when the relative interior of the face is given by
$$\operatorname{ri} P_J = \{x \in \mathbb{R}^n : A_i x = b_i \text{ for any } i \in I \text{ and } A_i x > b_i \text{ for any } i \in J\}.$$
The polytope is regularly described when all its faces have this property.

We assume here that $P$ is regularly described. This definition avoids, for example, in the description of a $P_J$, a hyperplane defined by two inequalities $A_i x \ge b_i$ and $-A_i x \ge -b_i$ instead of the equality $A_i x = b_i$. Note that every face of a regularly described $P$ has a unique regular description: the set $I$ consists of all indices $i$ such that $A_i x = b_i$ on the face.

The affine hull of $P_J$ is denoted by
$$F_J = \{x = (x_1, \dots, x_n)^T \in \mathbb{R}^n : A_i x = b_i \text{ for any } i \in I\},$$
which is parallel to the vector subspace
$$G_J = \{x = (x_1, \dots, x_n)^T \in \mathbb{R}^n : A_i x = 0 \text{ for any } i \in I\}.$$
We also let
$$E_J = \{y = (y_1, \dots, y_m)^T \in \mathbb{R}^m : y_i = 0 \text{ for any } i \in I\};$$
$E_I$ is defined similarly. Let us denote by $A_J$ (resp. $A_I$) the $m_J \times n$ (resp. $m_I \times n$) matrix whose rows are the $A_i$, $i \in J$ (resp. $i \in I$). $A_J$ defines a linear operator $A_J : \mathbb{R}^n \to \mathbb{R}^{m_J}$. We also let
$$b_J = A_J|_{G_J}, \quad b_J : G_J \to \mathbb{R}^{m_J},$$
so that
$$b_J^T : \mathbb{R}^{m_J} \to G_J, \quad b_J^T = \Pi_{G_J} A_J^T.$$
Here, for a vector subspace $E$, $\Pi_E$ denotes the orthogonal projection onto $E$. Let $D_{x,J}$ (resp. $D_{x,I}$) be the diagonal matrix with diagonal entries $A_i x - b_i$, $i \in J$ (resp. $i \in I$). It defines a linear operator $D_{x,J} : \mathbb{R}^{m_J} \to \mathbb{R}^{m_J}$. Since the faces of the polytope are regularly described, for any $x \in \operatorname{ri} P_J$, $D_{x,J}$ is nonsingular.

$P_J$ is associated with the linear program
$$(LP_J) \qquad \min_{x \in P_J} \langle c, x \rangle.$$
The barrier function
$$f_J(x) = \sum_{i \in J} \ln(A_i x - b_i)$$
is defined for any $x \in F_J$ and finite in $\operatorname{ri} P_J$, the relative interior of $P_J$. The barrier technique considers the family of nonlinear convex optimization problems
$$(LP_J(t)) \qquad \min_{x \in F_J} t\langle c, x \rangle - f_J(x)$$
with $t > 0$. The objective function $f_{t,J}(x) = t\langle c, x \rangle - f_J(x)$ is smooth, strictly convex, and
$$\lim_{x \to \partial P_J} f_{t,J}(x) = \infty,$$
thus $(LP_J(t))$ has a unique solution $\gamma_J(t) \in \operatorname{ri} P_J$ given by $Df_{t,J}(\gamma_J(t)) = 0$.

For any $x \in \operatorname{ri} P_J$, the first derivative of $f_J$ is given by
$$Df_J(x)u = \sum_{i \in J} \frac{A_i u}{A_i x - b_i} = \langle A_J^T D_{x,J}^{-1} e_J, u \rangle,$$
with $u \in G_J$ and $e_J = (1, \dots, 1)^T \in \mathbb{R}^{m_J}$. We have
$$g_J(x) = \operatorname{grad} f_J(x) = \Pi_{G_J} A_J^T D_{x,J}^{-1} e_J = b_J^T D_{x,J}^{-1} e_J.$$
The second derivative of $f_J$ at $x \in \operatorname{ri} P_J$ is given by
$$D^2 f_J(x)(u, v) = -\sum_{i \in J} \frac{(A_i u)(A_i v)}{(A_i x - b_i)^2} = \langle -A_J^T D_{x,J}^{-2} A_J v, u \rangle = \langle -b_J^T D_{x,J}^{-2} b_J v, u \rangle$$

for any $u, v \in G_J$, so that
$$Dg_J(x) = \operatorname{hess} f_J(x) = -b_J^T D_{x,J}^{-2} b_J.$$
To $P_J$ we associate the Newton vector field given by
$$N_J(x) = -Dg_J(x)^{-1} g_J(x), \quad x \in \operatorname{ri} P_J.$$
We have:

Lemma 4.1. For any $x \in \operatorname{ri} P_J$ this vector field is defined and
$$N_J(x) = (b_J^T D_{x,J}^{-2} b_J)^{-1} b_J^T D_{x,J}^{-1} e_J = b_J^\dagger D_{x,J} \Pi_{\operatorname{im}(D_{x,J}^{-1} b_J)}\, e_J \in G_J.$$

Proof. We first have to prove that $Dg_J(x)$ is nonsingular and that $N_J(x) \in G_J$. This second point is clear. For the first, we take $u \in G_J$ such that $Dg_J(x)u = 0$. This gives $A_J u = b_J u = 0$, which implies $Au = 0$ because $u \in G_J$, that is, $A_I u = 0$. Since $A$ is injective we get $u = 0$. By the same argument we see that $b_J$ is injective, so that $b_J^T b_J$ is nonsingular. The first expression for $N_J(x)$ comes from the description of $g_J$ and $Dg_J$. We have
$$(b_J^T D_{x,J}^{-2} b_J)^{-1} b_J^T D_{x,J}^{-1} e_J = (b_J^T b_J)^{-1} b_J^T D_{x,J}\, D_{x,J}^{-1} b_J (b_J^T D_{x,J}^{-2} b_J)^{-1} b_J^T D_{x,J}^{-1} e_J = b_J^\dagger D_{x,J} \Pi_{\operatorname{im}(D_{x,J}^{-1} b_J)}\, e_J \in G_J. \qquad \square$$

The curve $\gamma_J(t)$, $0 < t < \infty$, is the central path of the face $P_J$. It is given by $\gamma_J(t) \in F_J$ and $Df_J(\gamma_J(t))u = t\langle c, u\rangle$ for all $u \in G_J$, that is,
$$x \in F_J, \quad A_J^T D_{x,J}^{-1} e_J - tc \in G_J^\perp \quad\text{and}\quad \gamma_J(t) = x,$$
or, projecting on $G_J$,
$$A_i x = b_i, \ i \in I, \quad b_J^T D_{x,J}^{-1} e_J - t\Pi_{G_J} c = 0 \quad\text{and}\quad \gamma_J(t) = x.$$
When $t \to 0$, $\gamma_J(t)$ tends to the analytic center $\gamma_J(0)$ of $P_J$, defined as the unique solution of the convex program
$$-f_J(\gamma_J(0)) = \min_{x \in F_J} -f_J(x).$$
The analytic center is also given by
$$A_i x = b_i, \ i \in I, \quad b_J^T D_{x,J}^{-1} e_J = 0 \quad\text{and}\quad \gamma_J(0) = x,$$
so that $\gamma_J(0)$ is the unique singular point of $N_J$ in the face $P_J$.

We now investigate the properties of this extended vector field: continuity, differentiability and so on. We shall investigate the following abstract problem: for any $y \in \mathbb{R}^m$ we consider the linear operator $D_y : \mathbb{R}^m \to \mathbb{R}^m$ given by the $m \times m$ diagonal matrix $D_y = \operatorname{Diag}(y_i)$. Let $P$ be a vector subspace in $\mathbb{R}^m$. Then, for any $y \in \mathbb{R}^m$ with nonzero coordinates, the operator
$$D_y \circ \Pi_{D_y^{-1}(P)} : \mathbb{R}^m \to \mathbb{R}^m$$
is well defined. Can we extend its definition to any $y \in \mathbb{R}^m$? The answer is yes, as proved in the following lemma.

Lemma 4.2. Let $\bar y \in E_J$ be such that $\bar y_i \neq 0$ for any $i \in J$. Then $D_{\bar y}|_{E_J} : E_J \to E_J$ is nonsingular and
$$\lim_{y \to \bar y} D_y \circ \Pi_{D_y^{-1}(P)} = D_{\bar y}|_{E_J} \circ \Pi_{(D_{\bar y}|_{E_J})^{-1}(P \cap E_J)}.$$

Proof. To prove this lemma we suppose that $I = \{1, 2, \dots, m_1\}$ and $J = \{m_1 + 1, \dots, m_1 + m_2 = m\}$. Let us denote $p = \dim P$; $P$ is identified with an $m \times p$ matrix of rank $p$. We also introduce the following block matrices:
$$D_y = \begin{pmatrix} D_{y,1} & 0 \\ 0 & D_{y,2} \end{pmatrix}, \qquad P = \begin{pmatrix} U & 0 \\ V & W \end{pmatrix}.$$
The blocks appearing in these two matrices have the following dimensions: $D_{y,1} : m_1 \times m_1$, $D_{y,2} : m_2 \times m_2$, $U : m_1 \times p_1$, $V : m_2 \times p_1$, $W : m_2 \times p_2$. We also suppose that the columns of $\begin{pmatrix} 0 \\ W \end{pmatrix}$ are a basis for $P \cap E_J$ and those of $\begin{pmatrix} U \\ V \end{pmatrix}$ a basis of the orthogonal complement of $P \cap E_J$ in $P$, that is, $(P \cap E_J)^\perp \cap P$. Let us notice that $p_2 \le m_2$ and $\operatorname{rank} W = p_2$, and also that $p_1 \le m_1$ and $\operatorname{rank} U = p_1$. Let us prove this last assertion. Let $U_i$, $1 \le i \le p_1$, be the columns of $U$. If $\alpha_1 U_1 + \dots + \alpha_{p_1} U_{p_1} = 0$, we have
$$\alpha_1 \begin{pmatrix} U_1 \\ V_1 \end{pmatrix} + \dots + \alpha_{p_1} \begin{pmatrix} U_{p_1} \\ V_{p_1} \end{pmatrix} = \begin{pmatrix} 0 \\ \alpha_1 V_1 + \dots + \alpha_{p_1} V_{p_1} \end{pmatrix}.$$
The left-hand side of this equation is in $(P \cap E_J)^\perp \cap P$ and the right-hand side is in $P \cap E_J$. Thus this vector is equal to $0$, and since $\operatorname{rank} P = p$ we get $\alpha_1 = \dots = \alpha_{p_1} = 0$.

For every subspace $X$ in $\mathbb{R}^m$ with $\dim X = p$, identified with an $m \times p$ rank $p$ matrix, we have
$$\Pi_X = X(X^T X)^{-1} X^T.$$
This gives here
$$\Pi_{D_y^{-1}P} = \begin{pmatrix} D_{y,1}^{-1} U & 0 \\ D_{y,2}^{-1} V & D_{y,2}^{-1} W \end{pmatrix} \begin{pmatrix} U^T D_{y,1}^{-2} U + V^T D_{y,2}^{-2} V & V^T D_{y,2}^{-2} W \\ W^T D_{y,2}^{-2} V & W^T D_{y,2}^{-2} W \end{pmatrix}^{-1} \begin{pmatrix} U^T D_{y,1}^{-1} & V^T D_{y,2}^{-1} \\ 0 & W^T D_{y,2}^{-1} \end{pmatrix}$$
and
$$\Pi_{E_I} = \begin{pmatrix} I_{m_1} & 0 \\ 0 & 0 \end{pmatrix}, \qquad \Pi_{E_J} = \begin{pmatrix} 0 & 0 \\ 0 & I_{m_2} \end{pmatrix}.$$
We also notice that
$$D_y \Pi_{D_y^{-1}P} = D_y \Pi_{E_I} \Pi_{D_y^{-1}P} + D_y \Pi_{E_J} \Pi_{D_y^{-1}P}.$$
We have
$$\lim_{y \to \bar y} D_y \Pi_{E_I} \Pi_{D_y^{-1}P} = 0.$$
This is a consequence of the two following facts:
$$\|\Pi_{E_I} \Pi_{D_y^{-1}P}\| \le 1,$$
because it is the product of two orthogonal projections, and
$$\lim_{y \to \bar y} D_y \Pi_{E_I} = D_{\bar y} \Pi_{E_I} = 0.$$

We now have to study the limit
$$\lim_{y \to \bar y} D_y \Pi_{E_J} \Pi_{D_y^{-1}P}.$$
We have
$$D_y \Pi_{E_J} \Pi_{D_y^{-1}P} = \begin{pmatrix} 0 & 0 \\ V & W \end{pmatrix} \begin{pmatrix} U^T D_{y,1}^{-2} U + V^T D_{y,2}^{-2} V & V^T D_{y,2}^{-2} W \\ W^T D_{y,2}^{-2} V & W^T D_{y,2}^{-2} W \end{pmatrix}^{-1} \begin{pmatrix} U^T D_{y,1}^{-1} & V^T D_{y,2}^{-1} \\ 0 & W^T D_{y,2}^{-1} \end{pmatrix}.$$
Let us denote $A = U^T D_{y,1}^{-2} U$. The following identity holds:
$$\begin{pmatrix} A + V^T D_{y,2}^{-2} V & V^T D_{y,2}^{-2} W \\ W^T D_{y,2}^{-2} V & W^T D_{y,2}^{-2} W \end{pmatrix}^{-1} \begin{pmatrix} U^T D_{y,1}^{-1} & V^T D_{y,2}^{-1} \\ 0 & W^T D_{y,2}^{-1} \end{pmatrix} = \begin{pmatrix} I_{p_1} + A^{-1}(V^T D_{y,2}^{-2} V) & A^{-1}(V^T D_{y,2}^{-2} W) \\ W^T D_{y,2}^{-2} V & W^T D_{y,2}^{-2} W \end{pmatrix}^{-1} \begin{pmatrix} A^{-1}(U^T D_{y,1}^{-1}) & A^{-1}(V^T D_{y,2}^{-1}) \\ 0 & W^T D_{y,2}^{-1} \end{pmatrix}.$$
We will prove later that
$$\lim_{y \to \bar y} A^{-1} = \lim_{y \to \bar y} A^{-1}(U^T D_{y,1}^{-1}) = 0,$$
while $\lim_{y \to \bar y} D_{y,2} = D_{\bar y,2}$. Since
$$\begin{pmatrix} I_{p_1} & 0 \\ W^T D_{\bar y,2}^{-2} V & W^T D_{\bar y,2}^{-2} W \end{pmatrix}$$
is a nonsingular matrix, we get
$$\lim_{y \to \bar y} D_y \Pi_{E_J} \Pi_{D_y^{-1}P} = \begin{pmatrix} 0 & 0 \\ V & W \end{pmatrix} \begin{pmatrix} I_{p_1} & 0 \\ W^T D_{\bar y,2}^{-2} V & W^T D_{\bar y,2}^{-2} W \end{pmatrix}^{-1} \begin{pmatrix} 0 & 0 \\ 0 & W^T D_{\bar y,2}^{-1} \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & W(W^T D_{\bar y,2}^{-2} W)^{-1} W^T D_{\bar y,2}^{-1} \end{pmatrix},$$
and this last matrix represents the operator
$$D_{\bar y}|_{E_J} \circ \Pi_{(D_{\bar y}|_{E_J})^{-1}(P \cap E_J)},$$
as announced in this lemma.

To achieve the proof of this lemma we have to show that
$$\lim_{y \to \bar y} A^{-1} = \lim_{y \to \bar y} A^{-1}(U^T D_{y,1}^{-1}) = 0,$$
with $A = U^T D_{y,1}^{-2} U$. In fact it suffices to prove $\lim A^{-1} = 0$, because
$$\|A^{-1}(U^T D_{y,1}^{-1})\|^2 = \|A^{-1}(U^T D_{y,1}^{-1})(A^{-1}(U^T D_{y,1}^{-1}))^T\| = \|A^{-1}\|.$$
Since $U$ is full rank, the matrix $U^T U$ is positive definite, so that $\mu = \min \operatorname{Spec}(U^T U) > 0$. Let us denote by $\preceq$ the ordering on square matrices given by the cone of non-negative matrices. We have
$$D_{y,1}^{-2} \succeq \frac{1}{\max_i y_i^2} I_{m_1} \succeq \frac{1}{\|y_1\|^2} I_{m_1}$$
(here $y_1 = (y_1, \dots, y_{m_1})$ denotes the $I$-block of $y$), so that
$$U^T D_{y,1}^{-2} U \succeq \frac{1}{\|y_1\|^2} U^T U \succeq \frac{\mu}{\|y_1\|^2} I_{p_1}.$$
Taking the inverses reverses this inequality:
$$0 \prec (U^T D_{y,1}^{-2} U)^{-1} \preceq \frac{\|y_1\|^2}{\mu} I_{p_1} \to 0$$
when $y \to \bar y$, since the $I$-coordinates of $\bar y$ vanish, and we are done. □
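Lemma 4.2 can be illustrated numerically: although $D_y^{-1}$ blows up on the $I$ coordinates as $y \to \bar y$, the composition $D_y \circ \Pi_{D_y^{-1}(P)}$ converges. In the sketch below, which is ours, the subspace $P$, the point $\bar y$, and the perturbation direction are arbitrary choices arranged in the block form of the proof, with $I = \{1, 2\}$ and $J = \{3, 4\}$.

```python
import numpy as np

# Numerical illustration (ours) of Lemma 4.2 with m = 4, I = {1,2}, J = {3,4}.
Pmat = np.array([[1.0, 0.0],
                 [0.0, 0.0],
                 [0.0, 1.0],
                 [1.0, 1.0]])             # P = column span; P ∩ E_J = span{(0,0,1,1)}
ybar = np.array([0.0, 0.0, 2.0, -1.0])    # zero on I, nonzero on J

def D_pi(y):
    # Orthonormal basis of D_y^{-1} P via QR, then D_y o Pi_{D_y^{-1}(P)}.
    Q, _ = np.linalg.qr(np.diag(1.0 / y) @ Pmat)
    return np.diag(y) @ Q @ Q.T

for eps in [1e-2, 1e-4, 1e-6]:
    y = ybar + eps * np.array([1.0, -2.0, 3.0, 0.5])
    print(np.round(D_pi(y), 4))

# The limit predicted by the lemma: D_ybar|E_J composed with the
# projection onto (D_ybar|E_J)^{-1}(P ∩ E_J), acting on the J block.
w = np.array([1.0, 1.0])                  # J block of the basis of P ∩ E_J
v = w / ybar[2:]                          # (D_ybar|E_J)^{-1} applied to it
limit = np.zeros((4, 4))
limit[2:, 2:] = np.diag(ybar[2:]) @ np.outer(v, v) / (v @ v)
print(np.round(limit, 4))                 # matches the matrices above
```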

Corollary 4.1. The vector field $N(x)$ extends continuously to all of $\mathbb{R}^n$. Moreover, it is Lipschitz on compact sets. When all the faces of the polytope $P$ are regularly described, the continuous extension of $N(x)$ to the face $P_J$ of $P$ equals $N_J(x)$. Consequently any orbit of $N(x)$ in the polytope $P$ tends to one of the singularities of the extended vector field, i.e. either to a vertex or to an analytic center of one of the faces.

Proof. It is a consequence of Definition 2.2, Lemmas 4.1 and 4.2, and the equality $A_J^\dagger y = A^\dagger y$ for any $y \in E_J$, that $N(x)$ extends continuously to all of $\mathbb{R}^n$ and equals $N_J(x)$ on $P_J$. Moreover, a rational function which is continuous on $\mathbb{R}^n$ has bounded partial derivatives on compact sets and hence is Lipschitz. Now we use the characterization of the vector field restricted to the face to see that any orbit which does not start at the analytic center of a face tends to the boundary of the face, and any orbit which enters a small enough neighborhood of a vertex tends to that vertex. □

Remark 4.1. We have shown that $N(x)$ is Lipschitz. We do not know an example where it is not analytic, and we wonder what its order of smoothness is in general. In the next section we will show it is analytic generically.

5. Analyticity and Derivatives

In Sec. 3 we gave the following expression for the Newton vector field:
$$N(x) = \frac{\displaystyle\sum_{\substack{K_n \subset \{1,\dots,m\} \\ \det A_{K_n} \neq 0}} (x - u_{K_n})(\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2}{\displaystyle\sum_{\substack{K_n \subset \{1,\dots,m\} \\ \det A_{K_n} \neq 0}} (\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2}$$
for any $x \in \operatorname{Int} P$. Under a mild geometric assumption, the denominator of this fraction never vanishes, so that $N(x)$ may be extended to a real analytic vector field.

Theorem 5.1. Suppose that for any $x \in \partial P$ contained in the relative interior of a codimension $d$ face of $P$, we have $A_{k_i} x = b_{k_i}$ for exactly $d$ indices in $\{1, \dots, m\}$ and $A_l x > b_l$ for the other indices. In that case the row vectors $A_{k_i}$, $1 \le i \le d$, are linearly independent. Moreover, for such an $x$,
$$\sum_{\substack{K_n \subset \{1,\dots,m\} \\ \det A_{K_n} \neq 0}} (\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2 \neq 0,$$
so that $N(x)$ extends analytically to a neighborhood of $P$.

Proof. Under this assumption, for any $x \in P$, there exists a subset $K_n \subset \{1, \dots, m\}$ such that the submatrix $A_{K_n}$ is nonsingular and $A_l x - b_l > 0$ for any $l \notin K_n$: the active rows at $x$ are linearly independent and, since $\operatorname{rank} A = n$, they may be completed to a basis by $n - d$ of the remaining rows, which are all inactive at $x$. The corresponding term of the denominator is then positive, while every term is non-negative. □

Our next objective is to describe the singular points of this extended vector field.

Theorem 5.2. Under the previous geometric assumption, the singularities of the extended vector field are: the analytic center of the polytope and the analytic centers of the different faces of the polytope, including the vertices. Each of them is hyperbolic: if $x \in \partial P$ is the analytic center of a codimension $d$ face $F$ of $P$, then the derivative $DN(x)$ has $n - d$ eigenvalues equal to $-1$, with corresponding eigenvectors contained in the linear space $F_0$ parallel to $F$, and $d$ eigenvalues equal to $1$, with corresponding eigenvectors contained in a complement of $F_0$.

Proof. The first part of this theorem, about the $-1$ eigenvalues, is the consequence of two facts. The first one is a well-known fact about the Newton operator: its derivative is equal to $-\operatorname{id}$ at a zero (if $N(x) = 0$, then
$$DN(x) = D(-Dg(x)^{-1} g(x)) = D(-Dg(x)^{-1})\, g(x) - Dg(x)^{-1} Dg(x) = -\operatorname{id},$$
since $g(x) = 0$). The second fact is proved in Sec. 4: the restriction of $N(x)$ to a face is the Newton vector field associated with the restriction of $g(x)$ to this face.

We now take care of the $+1$ eigenvalues. To simplify the notations, we suppose that $A_i x = b_i$ for $1 \le i \le d$, $A_i x > b_i$ when $d + 1 \le i \le m$, and $N(x) = 0$. $N$ is analytic and its derivative in the direction $v$ is given by
$$DN(x)v = \frac{\operatorname{Num}}{\displaystyle\sum_{\substack{K_n \subset \{1,\dots,m\} \\ \det A_{K_n} \neq 0}} (\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2}$$
with
$$\operatorname{Num} = \sum_{\substack{K_n \subset \{1,\dots,m\} \\ \det A_{K_n} \neq 0}} v\, (\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2 + \sum_{\substack{K_n \subset \{1,\dots,m\} \\ \det A_{K_n} \neq 0}} (x - u_{K_n})(\det A_{K_n})^2 \sum_{l_0 \notin K_n} 2 A_{l_0} v\, (A_{l_0} x - b_{l_0}) \prod_{\substack{l \notin K_n \\ l \neq l_0}} (A_l x - b_l)^2$$
(the terms coming from the derivative of the denominator vanish because $N(x) = 0$). This gives
$$DN(x)v = v + \frac{\displaystyle\sum_{\substack{K_n \\ \det A_{K_n} \neq 0}} (x - u_{K_n})(\det A_{K_n})^2 \sum_{l_0 \notin K_n} 2 A_{l_0} v\, (A_{l_0} x - b_{l_0}) \prod_{\substack{l \notin K_n \\ l \neq l_0}} (A_l x - b_l)^2}{\displaystyle\sum_{\substack{K_n \\ \det A_{K_n} \neq 0}} (\det A_{K_n})^2 \prod_{l \notin K_n} (A_l x - b_l)^2} = v + Mv,$$
where $M$ is, up to a constant factor (i.e. constant in $v$), the $n \times n$ matrix equal to
$$\sum_{\substack{K_n \\ \det A_{K_n} \neq 0}} \sum_{l_0 \notin K_n} (\det A_{K_n})^2 (A_{l_0} x - b_{l_0}) \left(\prod_{\substack{l \notin K_n \\ l \neq l_0}} (A_l x - b_l)^2\right) (x - u_{K_n}) A_{l_0},$$
which is also equal to
$$\sum_{\substack{\{1,\dots,d\} \subset K_n,\ \det A_{K_n} \neq 0 \\ d+1 \le l_0 \le m,\ l_0 \notin K_n}} \frac{(\det A_{K_n})^2}{A_{l_0} x - b_{l_0}} \left(\prod_{l \notin K_n} (A_l x - b_l)^2\right) (x - u_{K_n}) A_{l_0},$$
because $A_i x = b_i$ when $1 \le i \le d$ and $A_i x > b_i$ otherwise.

To prove our theorem we have to show that $\dim \ker M \ge d$. This gives at least $d$ independent vectors $v_i$ such that $M v_i = 0$, that is, $DN(x) v_i = v_i$; thus $1$ is an eigenvalue of $DN(x)$ and its multiplicity is $\ge d$. In fact it is exactly $d$ because we already have the eigenvalue $-1$ with multiplicity $n - d$. The inequality $\dim \ker M \ge d$ is given by $\operatorname{rank} M \le n - d$. Why is it true? $M$ is a linear combination of rank $1$ matrices $(x - u_{K_n}) A_{l_0}$, so that the rank of $M$ is less than or equal to the dimension of the span of the vectors $x - u_{K_n}$, with $K_n$ as before. Since $\{1, \dots, d\} \subset K_n$, $A_i x = b_i$ when $1 \le i \le d$, and $A_{K_n} u_{K_n} = b_{K_n}$, we have $A(x - u_{K_n}) = (0, \dots, 0, y_{d+1}, \dots, y_m)^T$. From the hypothesis, the row vectors $A_1, \dots, A_d$ defining the face $F$ are independent; thus the set of vectors $u \in \mathbb{R}^n$ such that the vector $Au \in \mathbb{R}^m$ begins with $d$ zeros has dimension $n - d$, and we are done. □
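Theorem 5.2 can be checked by finite differences on the triangle example of the next section: at the analytic center of an edge (a codimension 1 face), $DN(x)$ should have one eigenvalue $-1$ tangent to the edge and one eigenvalue $+1$ transverse to it. The sketch below is ours; the step size and the centered-difference scheme are our choices, and the field is evaluated through the rational expression of Proposition 3.1.

```python
import numpy as np
from itertools import combinations

# Finite-difference check (ours) of Theorem 5.2 on the triangle of Sec. 6.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, -1.0])
m, n = A.shape

def N(x):
    """Extended field, via the rational expression of Proposition 3.1."""
    num, den = np.zeros(n), 0.0
    for K in combinations(range(m), n):
        AK = A[list(K)]
        d = np.linalg.det(AK)
        if abs(d) < 1e-12:
            continue
        uK = np.linalg.solve(AK, b[list(K)])
        rest = [l for l in range(m) if l not in K]
        w = d**2 * np.prod((A[rest] @ x - b[rest]) ** 2)
        num, den = num + (x - uK) * w, den + w
    return num / den

x0 = np.array([0.5, 0.0])       # analytic center of the face y = 0 (d = 1)
h = 1e-6
DN = np.column_stack([(N(x0 + h * e) - N(x0 - h * e)) / (2 * h)
                      for e in np.eye(n)])
print(np.sort(np.linalg.eigvals(DN).real))   # approx [-1.  1.]
```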

Remark 5.1. The last theorem implies that $N(x)$ is Morse–Smale in the terminology of dynamical systems. Recall also that we are really interested in the positive time trajectories of $-N(x)$. For $-N(x)$ the eigenvalues at the critical points are multiplied by $-1$, so in the faces the critical points of $-N(x)$ are sources and their stable manifolds are transverse to the faces.

6. Example

Let us consider the case of a triangle in the plane. Since the Newton vector field is affinely invariant (Proposition 2.1), we need only consider the triangle with vertices $(0, 0)$, $(1, 0)$ and $(0, 1)$. A dual description is given by the three inequalities $x \ge 0$, $y \ge 0$, $-x - y \ge -1$, which correspond to the following data:
$$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ -1 & -1 \end{pmatrix}, \qquad b = \begin{pmatrix} 0 \\ 0 \\ -1 \end{pmatrix}, \qquad D_{(x,y)} = \begin{pmatrix} x & 0 & 0 \\ 0 & y & 0 \\ 0 & 0 & 1 - x - y \end{pmatrix}.$$

[Figure: Newton vector field in the triangle.]

The corresponding Newton vector field is given by the rational expressions
$$N(x, y) = \begin{pmatrix} \dfrac{xz^2 - x^2 z + xy^2 - x^2 y}{x^2 + y^2 + z^2} \\[2ex] \dfrac{x^2 y - xy^2 + yz^2 - y^2 z}{x^2 + y^2 + z^2} \end{pmatrix}$$
with $z = 1 - x - y$. This vector field is analytic on the whole plane. The singular points are the three vertices, the midpoints of the three sides, and the center of gravity. The arrows in the figure are for $-N(x)$, and the critical points are clearly sources in their faces.
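A short computation, ours, confirms the list of singular points by evaluating the field above at each of them:

```python
import numpy as np

# Sketch (ours): evaluate the explicit rational field of this section and
# verify its seven singular points in the closed triangle.
def N(x, y):
    z = 1.0 - x - y
    q = x**2 + y**2 + z**2
    return np.array([x*z**2 - x**2*z + x*y**2 - x**2*y,
                     x**2*y - x*y**2 + y*z**2 - y**2*z]) / q

singular = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0),   # vertices
            (0.5, 0.0), (0.0, 0.5), (0.5, 0.5),   # midpoints of the sides
            (1/3, 1/3)]                           # center of gravity
for p in singular:
    print(p, np.allclose(N(*p), 0.0))             # all True
```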


[Figure: Five trajectories.]

References

Bayer, D. & Lagarias, J. [1989a] "The non-linear geometry of linear programming I: Affine and projective scaling trajectories," Trans. Amer. Math. Soc. 314, 499–526.
Bayer, D. & Lagarias, J. [1989b] "The non-linear geometry of linear programming II: Legendre transform coordinates and central trajectories," Trans. Amer. Math. Soc. 314, 527–581.
Bayer, D. & Lagarias, J. [1991] "Karmarkar's linear programming algorithm and Newton's method," Math. Progr. A50, 291–330.
Dedieu, J.-P., Malajovich, G. & Shub, M. "On the curvature of the central path of linear programming theory," to appear.
Megiddo, N. & Shub, M. [1989] "Boundary behaviour of interior point algorithms in linear programming," Math. Oper. Res. 14, 97–146.
Renegar, J. [2001] A Mathematical View of Interior-Point Methods in Convex Optimization (SIAM, Philadelphia).
Vavasis, S. & Ye, Y. [1996] "A primal–dual accelerated interior point method whose running time depends only on A," Math. Progr. A74, 79–120.
