Proceedings of the Princeton Symposium on Mathematical Programming


by Harold William Kuhn

Overview

This volume contains thirty-three selected general research papers devoted to the theory and application of the mathematics of constrained optimization, including linear programming and its extensions to convex programming, general nonlinear programming, integer programming, and programming under uncertainty.

Originally published in 1971.

The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These paperback editions preserve the original texts of these important books while presenting them in durable paperback editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.

Product Details

ISBN-13: 9780691620732
Publisher: Princeton University Press
Publication date: 03/08/2015
Series: Princeton Legacy Library Series
Pages: 628
Product dimensions: 6.70(w) x 9.90(h) x 1.40(d)


Read an Excerpt

Proceedings of the Princeton Symposium on Mathematical Programming


By Harold W. Kuhn

PRINCETON UNIVERSITY PRESS

Copyright © 1970 Princeton University Press
All rights reserved.
ISBN: 978-0-691-08088-8



CHAPTER 1

TWO METHODS OF DECOMPOSITION FOR LINEAR PROGRAMMING

J. Abadie and M. Sakarovitch

I. Introduction

Consider a linear program having the following structure:

minimize cx + dy + ... + ez

subject to: Px = p
            Qy = q
            ...
            Rz = r
            Ax + By + ... + Cz = h
            x ≥ 0, y ≥ 0, ..., z ≥ 0,

where x is an n1-vector, y is an n2-vector, ..., z is an nt-vector,

p is an m1-vector, q is an m2-vector, ..., r is an mt-vector,

h is an m-vector,

P is an m1 x n1 matrix, Q is an m2 x n2 matrix, ..., R is an mt x nt matrix,

A is an m x n1 matrix, B is an m x n2 matrix, ..., C is an m x nt matrix,

c is an n1 row-vector, d is an n2 row-vector, ..., e is an nt row-vector.


Such linear programs arise frequently. They may represent, for instance, a decentralized economy where scarce resources have to be shared by otherwise independent units. The principle of decomposition of Dantzig and Wolfe [1], [2] replaces the solution of such a structured linear program by the successive solution of several linear programs of limited size.

The principle of the methods to be presented here will be sketched, without loss of generality, on the case t = 2:

(P): minimize f = cx + dy

     subject to: Px = p
                 Qy = q
                 Ax + By = h
                 x ≥ 0, y ≥ 0.

The initial idea of these methods consists in partitioning the "scarce resource vector" h into a + b = h, and in solving the subproblems:

(P'): minimize cx, subject to Px = p, Ax = a, x ≥ 0;

(P"): minimize dy, subject to Qy = q, By = b, y ≥ 0.

Clearly there exist "good" partitions of h such that the union of the solutions of the two subproblems constitutes a solution of the initial problem. Both methods search for such a good partition of h.
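To make the partition idea concrete, the following is a minimal Python sketch (not from the paper): it solves a small illustrative instance of (P) with t = 2 as a single linear program, then recovers the same optimum through the subproblems (P') and (P") under the "good" partition a = Ax*, b = h - a induced by the optimal x*. All data and helper names are invented for illustration.

    import numpy as np
    from scipy.optimize import linprog

    # Illustrative block data for (P): Px = p, Qy = q, Ax + By = h, x, y >= 0.
    c = np.array([1.0, 2.0, 1.0]); P = np.array([[1.0, 1.0, 1.0]]); p = np.array([4.0])
    d = np.array([3.0, 1.0, 2.0]); Q = np.array([[1.0, 2.0, 1.0]]); q = np.array([5.0])
    A = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]])
    B = np.array([[2.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
    h = np.array([6.0, 5.0])

    # (P) solved as one LP over the stacked variables (x, y).
    A_eq = np.block([[P, np.zeros((1, 3))],
                     [np.zeros((1, 3)), Q],
                     [A, B]])
    full = linprog(np.concatenate([c, d]), A_eq=A_eq,
                   b_eq=np.concatenate([p, q, h]), bounds=(0, None))

    def subproblem(cost, M, rhs, coupling, share):
        # minimize cost.x  subject to  M x = rhs,  coupling x = share,  x >= 0
        return linprog(cost, A_eq=np.vstack([M, coupling]),
                       b_eq=np.concatenate([rhs, share]), bounds=(0, None))

    a = A @ full.x[:3]                      # partition induced by the optimal x*
    res1 = subproblem(c, P, p, A, a)        # (P')
    res2 = subproblem(d, Q, q, B, h - a)    # (P")
    print(full.fun, res1.fun + res2.fun)    # equal: this partition is "good"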

It is of interest to note here that Dantzig and Wolfe's principle of decomposition was discovered as an extension of a method by Ford and Fulkerson to solve the multicommodity flow problem. Similarly the basic idea of the present methods, or at least of method I, is an extension of an alternative approach to this same multicommodity flow problem.

Notations: If X is a set, |X| will denote the cardinality of this set, i.e., the number of its elements.

Let

N1 = {1, ..., n1}, N2 = {1, ..., n2};

we will have:

I ⊂ N1, J ⊂ N2.

If a denotes a column vector, c a row vector, and A a matrix:

aI will denote the |I|-vector whose components are ai, ∀ i ∈ I;

cI will denote the |I|-row-vector whose components are ci, ∀ i ∈ I;

AI will denote the restriction of matrix A to the columns whose indices belong to I;

AJ will denote the restriction of matrix A to the rows whose indices belong to J;

AIJ will denote the restriction of matrix A to the matrix of elements Aij for i ∈ I and j ∈ J.

(P'(I)) will denote the restriction of problem (P') to its I-columns, i.e.:

(P'(I)): minimize cI xI, subject to PI xI = p, AI xI = a, xI ≥ 0,

and similarly for (P"(J)).

(P(I, J)) will denote the restriction of problem (P) to its I ∪ J columns, i.e.:

(P(I, J)): minimize cI xI + dJ yJ, subject to PI xI = p, QJ yJ = q, AI xI + BJ yJ = h, xI ≥ 0, yJ ≥ 0.
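In code, these restrictions are plain submatrix selections; a minimal numpy sketch (the array and the index sets are illustrative):

    import numpy as np

    A = np.arange(20.0).reshape(4, 5)   # any m x n matrix
    I = [0, 2, 4]                       # column indices
    J = [1, 3]                          # row indices

    A_I  = A[:, I]           # restriction of A to the columns indexed by I
    A_J  = A[J, :]           # restriction of A to the rows indexed by J
    A_IJ = A[np.ix_(J, I)]   # restriction to rows in J and columns in I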

At each stage of the process, a partition of h into a + b = h is known, and the corresponding optimal solutions of (P') and (P") are computed. I (resp. J) denotes a subset of the set of basic columns in (P') (resp. (P")). Then the process can be decomposed into the two basic steps:

Step 1: Alter, if necessary, the present partition of vector h in such a way that the optimal solution of (P(I, J)) (which is in both cases the restriction of (P) to the set of columns I ∪ J) is obtained as the union of the solutions of (P') and (P").

Step 2: If the optimal solution of the restricted problem is not an optimal solution of (P), redefine the restricted problem by introducing into I [union] J a column such that the new restricted problem has a better optimal solution (better in the sense that the objective function decreases).

Remark: For a partition of vector h corresponding to a basic optimal solution of (P), (P') and (P") together have m basic variables which are zero in their optimal solutions if (P) is nondegenerate.

This is true because at an optimal basic solution of (P), if no degeneracy occurs, exactly m + m1 + m2 of the variables xi and yj will be positive; but this also constitutes an optimal solution for (P') and (P"), which together contain m1 + m2 + 2m rows.

The first method is presented in part I, the second method in part II. These parts are independent and can be read in any order. A third part sketches an economic interpretation of these methods.


FIRST PART: Method I

After some preliminary remarks, an algorithm (algorithm I) will be given; finally, its finite convergence will be proved for a nondegenerate problem (P).

1. Preliminaries: Suppose that, at some step, a certain partition a + b = h has been found, and that the subproblems (P') and (P") have been solved; let x̄, ȳ be these solutions, and let:

I = {i | x̄i > 0}, J = {j | ȳj > 0}.

Lemma 1. If:

(i) x̄I, ȳJ are respectively optimal solutions of (P'(I)) and (P"(J)),

(ii) ū, ū' and v̄, v̄' are corresponding optimal simplex multipliers,

(iii) ū' = v̄',

then x̄I, ȳJ is an optimal solution of (P(I, J)).


Proof: Straightforward, applying the usual Kuhn-Tucker optimality condition for linear programming.

The remark at the end of the introduction suggests that the closer we are to a "good" partition of h, the more degenerate the subproblems will be. But if (P') and (P") are degenerate, the optimal simplex multipliers ū' and v̄' will not be uniquely defined. We now construct a problem (Q) which will give us optimal simplex multipliers of (P'(I)) and (P"(J)) such that the distance between the vectors u' and v' (in the L2 norm) is minimized:

(Q): minimize w = ½ (u' - v')(u' - v')ᵀ, subject to u PI + u' AI = cI, v QJ + v' BJ = dJ.

Lemma 2. Solving (Q) is equivalent to solving the linear system (S):

(1) PI λ = 0
(2) QJ μ = 0
(3) AI λ = (u' - v')ᵀ
(4) BJ μ = (v' - u')ᵀ
(5) u PI + u' AI = cI
(6) v QJ + v' BJ = dJ

where λ ∈ RI, μ ∈ RJ.

Proof: Straightforward, using the Lagrangian

L(u, u', v, v'; λ, μ) = ½ (u' - v')(u' - v')ᵀ + (cI - u PI - u' AI) λ + (dJ - v QJ - v' BJ) μ

and writing that its partial derivatives with respect to u, u', v and v' vanish.
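As a numerical illustration, here is one way the system (S), as reconstructed above, could be assembled and solved with numpy; the stacked-unknown layout, the name solve_S, and the use of least squares (so that a singular or inconsistent system does not raise) are choices of this sketch, not of the paper.

    import numpy as np

    def solve_S(P_I, A_I, c_I, Q_J, B_J, d_J):
        """Assemble and solve (S) in the stacked unknown w = (lam, mu, u, u', v, v')."""
        m1, nI = P_I.shape
        m2, nJ = Q_J.shape
        m = A_I.shape[0]
        o = np.cumsum([0, nI, nJ, m1, m, m2])   # offsets of lam, mu, u, u', v, v'
        N = o[5] + m                            # number of unknowns = number of equations
        M = np.zeros((N, N)); rhs = np.zeros(N); r = 0
        M[r:r+m1, o[0]:o[1]] = P_I; r += m1                          # (1) P_I lam = 0
        M[r:r+m2, o[1]:o[2]] = Q_J; r += m2                          # (2) Q_J mu = 0
        M[r:r+m, o[0]:o[1]] = A_I                                    # (3) A_I lam - u'^T + v'^T = 0
        M[r:r+m, o[3]:o[4]] = -np.eye(m); M[r:r+m, o[5]:N] = np.eye(m); r += m
        M[r:r+m, o[1]:o[2]] = B_J                                    # (4) B_J mu + u'^T - v'^T = 0
        M[r:r+m, o[3]:o[4]] = np.eye(m); M[r:r+m, o[5]:N] = -np.eye(m); r += m
        M[r:r+nI, o[2]:o[3]] = P_I.T; M[r:r+nI, o[3]:o[4]] = A_I.T   # (5) u P_I + u' A_I = c_I
        rhs[r:r+nI] = c_I; r += nI
        M[r:r+nJ, o[4]:o[5]] = Q_J.T; M[r:r+nJ, o[5]:N] = B_J.T      # (6) v Q_J + v' B_J = d_J
        rhs[r:r+nJ] = d_J
        w, *_ = np.linalg.lstsq(M, rhs, rcond=None)   # tolerant of a singular system
        return (w[o[0]:o[1]], w[o[1]:o[2]],           # lam, mu
                w[o[2]:o[3]], w[o[3]:o[4]],           # u, u'
                w[o[4]:o[5]], w[o[5]:N])              # v, v'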

If at some step of the algorithm we solve (S) and find a solution λ̄, μ̄, ū', v̄', ū, v̄, two cases may occur:

(1) ū' = v̄'; then, from Lemma 1, x̄, ȳ is an optimal solution of (P(I, J)).

(2) ū' ≠ v̄'; then λ̄ ≠ 0, μ̄ ≠ 0, and we define:

Δxi = -θ λ̄i for i ∈ I, Δxi = 0 for i ∉ I; Δyj = -θ μ̄j for j ∈ J, Δyj = 0 for j ∉ J. (7)

Let θ̄ be the supremum of θ such that x̄I - θ λ̄ ≥ 0, ȳJ - θ μ̄ ≥ 0.

Lemma 3. x̄I - θ λ̄, ȳJ - θ μ̄ is a feasible solution of (P(I, J)), strictly better than x̄I, ȳJ, for all θ satisfying 0 < θ ≤ θ̄ if 0 < θ̄ < +∞, and for all θ > 0 if θ̄ = +∞. In the latter case, the infimum of the objective function in problem (P) is -∞.

Proof: Feasibility is easily checked by substitution into the constraints of (P(I, J)) and use of (1), (2), (3), (4).

Moreover, (5), (1), (3) imply in succession

cI λ̄ = (ū PI + ū' AI) λ̄ = ū' AI λ̄ = ū' (ū' - v̄')ᵀ;

similarly:

dJ μ̄ = v̄' (v̄' - ū')ᵀ.

We have then, for any θ in the range specified in the Lemma:

cI (x̄I - θ λ̄) + dJ (ȳJ - θ μ̄) = cI x̄I + dJ ȳJ - θ (ū' - v̄')(ū' - v̄')ᵀ.

The conclusion follows from the assumption ū' ≠ v̄'.
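The supremum θ̄ of Lemma 3 is an ordinary ratio test; a small Python sketch (illustrative names), returning +∞ in the unbounded case:

    import numpy as np

    def theta_bar(x_I, lam, y_J, mu):
        # sup{theta >= 0 : x_I - theta*lam >= 0 and y_J - theta*mu >= 0}
        num = np.concatenate([x_I, y_J])
        den = np.concatenate([lam, mu])
        mask = den > 0                  # only positive components bound theta
        if not mask.any():
            return np.inf               # then the infimum of f in (P) is -infinity
        return float(np.min(num[mask] / den[mask]))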

2. Algorithm I:

Step 0: Let a + b = h be an arbitrary partition, subject only to the requirement that the subproblems (P') and (P") both have feasible solutions and are not degenerate.

Solve (P') and (P").

(0.1) If no optimal solution exists for some subproblem, then the infimum of f is -∞: terminate.

(0.2) If both subproblems have an optimal solution, then let x̄ and ȳ be respectively a basic optimal solution of (P') and of (P"). Let I = {i | x̄i > 0}, J = {j | ȳj > 0}. Go to step 1.

Step 1: Solve system (S). Let λ̄, μ̄, ū', v̄', ū, v̄ be the solution.

(1.1) If ū' = v̄', go to step 2.

(1.2) If ū' ≠ v̄', define Δx and Δy through (7). Let θ̄ be the supremum of θ such that x̄ + Δx ≥ 0, ȳ + Δy ≥ 0.

(1.2.1) If θ̄ = +∞, the infimum of f in problem (P) is -∞: terminate.

(1.2.2) If θ̄ < +∞, replace x̄, ȳ by x̄ + Δx, ȳ + Δy evaluated at θ = θ̄, and set I = {i | x̄i > 0}, J = {j | ȳj > 0}. End of iteration. Begin next iteration at step 1.

Step 2: Let γ = min{ci - ū Pi - ū' Ai : i ∉ I},
        δ = min{dj - v̄ Qj - v̄' Bj : j ∉ J}.

(2.1) If min{γ, δ} ≥ 0, terminate: x̄, ȳ is an optimal solution of problem (P).

(2.2) (a) If γ = cs - ū Ps - ū' As ≤ δ and γ < 0, set I = I ∪ {s}. End of iteration. Begin next iteration at step (1.1). (b) Symmetrically, if δ = dt - v̄ Qt - v̄' Bt < γ and δ < 0, set J = J ∪ {t}. End of iteration. Begin next iteration at step (1.1).
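Step 2 is a standard pricing computation over the columns outside I and J; a minimal numpy sketch (array names and shapes follow the introduction, and the helper price is invented for illustration):

    import numpy as np

    def price(cost, M, coupling, u, u_p, inside):
        # Reduced costs  c_i - u P_i - u' A_i  over the columns i outside `inside`.
        red = cost - u @ M - u_p @ coupling
        outside = np.setdiff1d(np.arange(cost.size), inside)
        best = outside[np.argmin(red[outside])]
        return red[best], best

    # gamma, s = price(c, P, A, u, u_p, I)
    # delta, t = price(d, Q, B, v, v_p, J)
    # If min(gamma, delta) >= 0, the current solution is optimal for (P);
    # otherwise enlarge I with s (if gamma <= delta) or J with t.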


3. Justification of the algorithm:

Lemma 4. ū' = v̄' in (1.1) if and only if the system:

(S'): u PI + u' AI = cI
      v QJ + v' BJ = dJ
      u' = v'

has a solution.

Proof: From Lemma 2, (S) is equivalent to (Q), and (Q) has a solution with wmin = 0 if and only if (S') has a solution.

Lemma 5. In step (1.2.2), one and only one variable xi or yj becomes zero for θ = θ̄.

Proof: Obvious, because of the nondegeneracy assumption on problem (P).

Lemma 6. The system (S') either is infeasible or has a unique solution.

Proof: At the very first stage of the algorithm, because of the nondegeneracy assumption on problems (P') and (P") at step 0, equations (5) and (6) uniquely determine u, u', v, v'.

Afterwards, either one equation (and, by Lemma 5, just one) is deleted from the system (S') when it is infeasible, or a nonredundant constraint is added in step 2 when it is feasible.

Lemma 7. If at some nonterminal stage of the algorithm we pass through (1.1), i.e., if ū' = v̄', then at the next stage we have ū' ≠ v̄', i.e., we pass through (1.2).

Proof: If at some stage ū' = v̄', then the system (S') is feasible; in addition, its solution is unique, from Lemma 6.

At the next stage, after the addition of a nonredundant constraint in step (2.2), the system (S') must be infeasible and then, from Lemma 4, ū' ≠ v̄'.

Lemma 8. Let f be a convex function defined on a convex set C, and let D be a convex body (a convex set whose interior is not empty) such that the intersection C ∩ D is not empty. Define the two problems:

I. minimize f(x), subject to x ∈ C

II. minimize f(x), subject to x ∈ C ∩ D.

Let Problem I have a unique solution xI, and let xII be a solution of Problem II. If xI ∉ D, then:

(i) f(xI) < f(xII);

(ii) xII belongs to the boundary of D.

Proof: Since (i) is obvious, it remains to prove (ii). Suppose xII were an interior point of D, and let V be an open neighborhood of xII excluding any boundary point of D. The point xII is optimal for the following problem:

III. minimize f(x), subject to x ∈ C ∩ V.

Since C ∩ V is a set relatively open in C, it follows that xII is locally optimal for Problem I; by convexity, it follows that xII is globally optimal for Problem I: this contradicts the uniqueness of the solution of Problem I.

Lemma 9. θ̄ > 0 at every passage through Step (1.2).

Proof: Suppose that, at some passage in Step (1.2), we have θ̄ = 0.

Convergence: Suppose first that the algorithm ends in Step (2.1). Since ū' = v̄', the solution is optimal for (P(I, J)) (see Lemma 1); since min{γ, δ} ≥ 0, the solution is optimal for (P) (apply the usual optimality criterion for linear programming).

Suppose next that the algorithm ends in Step (1.2.1). We have found a feasible direction of change for x and y (from Lemma 3) along which θ is unbounded and the objective function decreases without bound. The same result holds if the algorithm ends in Step (0.1).

Suppose finally that the algorithm cycles indefinitely: then we would pass an infinite number of times through step (1.2). From Lemma 9, at each passage through step (1.2), θ̄ > 0, and then either I or J decreases by one element. Thus we must also pass an infinite number of times through step (2), and hence there would be two passages through step (2) with the same sets of indices I, J. From one passage to the next, the objective function would have strictly decreased (see Lemmas 3, 7, 9): this is a contradiction, since at each such passage we have an optimum of (P(I, J)), from Lemma 1.


4. Applications of the algorithm:

There is no conceptual difficulty in applying this algorithm to the case of more than two subproblems. For instance, for an initial problem with three subproblems:

minimize cx + dy + ez, subject to Px = p, Qy = q, Rz = r, Ax + By + Cz = h, x ≥ 0, y ≥ 0, z ≥ 0,

the system (S) would be written:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]


We see that the drawback of this method is that in general system (S) is very large (this is true even with only two subproblems). If t is the number of subproblems, the size of (S) is:

(Continues...)

Excerpted from Proceedings of the Princeton Symposium on Mathematical Programming by Harold W. Kuhn. Copyright © 1970 Princeton University Press. Excerpted by permission of PRINCETON UNIVERSITY PRESS.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
