Optimization Theory with Applications

by Donald A. Pierre
Overview

Optimization principles are of undisputed importance in modern design and system operation. They can be used for many purposes: optimal design of systems, optimal operation of systems, determination of performance limitations of systems, or simply the solution of sets of equations. While most books on optimization are limited to essentially one approach, this volume offers a broad spectrum of approaches, with emphasis on basic techniques from both classical and modern work.
After an introductory chapter on the system concepts that prevail throughout optimization problems of all types, the author discusses the classical theory of minima and maxima (Chapter 2). In Chapter 3, necessary and sufficient conditions for relative extrema of functionals are developed from the viewpoint of the Euler-Lagrange formalism of the calculus of variations. Chapter 4 is restricted to linear time-invariant systems, for which significant results can be obtained via transform methods with a minimum of computational difficulty. Chapter 5 emphasizes applied problems that can be converted to a standard form for linear programming solution, with the fundamentals of convex sets and the simplex technique given detailed attention. Chapter 6 examines search techniques and nonlinear programming. Chapter 7 covers Bellman's principle of optimality, and finally, Chapter 8 gives valuable insight into the maximum principle extension of the classical calculus of variations.
Designed for use in a first course in optimization for advanced undergraduates, graduate students, practicing engineers, and systems designers, this carefully written text is accessible to anyone with a background in basic differential equation theory and matrix operations. To help students grasp the material, the book contains many detailed examples and problems, and also includes reference sections for additional reading.


Product Details

ISBN-13: 9780486136950
Publisher: Dover Publications
Publication date: 06/14/2012
Series: Dover Books on Mathematics
Sold by: Barnes & Noble
Format: eBook
Pages: 640
File size: 47 MB

Read an Excerpt

CHAPTER 1

INTRODUCTION


1-1. OPTIMIZATION IN PERSPECTIVE

When an individual is confronted with a problem, he must progress through an alternating sequence of evaluations and decisions. Greber lists six cardinal steps on which evaluations and decisions are made in the solution of engineering problems, namely,

1. Recognition of need.

2. Formulation of the problem.

3. Resolving the problem into concepts that suggest a solution.

4. Finding elements for the solution.

5. Synthesizing the solution.

6. Simplifying and optimizing the solution.

The order in which these steps are followed can differ considerably from one problem to another. Insight gained at any given step may be employed to modify conclusions of other steps: we should visualize a set of feedback paths which allow transition from any step to any preceding step in accordance with the dictates of a given problem. For example, the step of "synthesizing the solution" or the "formulation of the problem" may be modified or augmented by considerations associated with "simplifying and optimizing the solution."

Problems are generally associated with physical things; without a thorough understanding of the physical principles upon which a given problem solution depends, the application of optimization principles is of dubious value. There is no substitute for knowledge of physical principles and devices, nor is there any substitute for an inventive idea. The ideal role that optimization plays in the solution of problems is evidenced in the following statement: After constraints that must be satisfied by the problem solution are defined, either directly or indirectly, all significant forms of solution which satisfy the constraints should be conceived; and from the generally infinite number of such solutions, the one or ones which are best under some criteria of goodness should be extracted by using optimization principles. As with most ideals, the ideals of optimization are not easily achieved: the identification of all significant forms of solution to a given problem can be accomplished in special cases only, and limitations on time available to produce the solution to a given problem are always present. Thus, the good designer or manager does the best that he can, all factors considered.

Problems which involve the operation or the design of systems are generally of the type to which optimization principles can be beneficially applied. Moreover, problems of analysis can be viewed as optimization problems, albeit trivial ones; for example, if a linear circuit is given with specified voltage and current sources and specified initial conditions, the problem of finding the current distribution in the circuit as a function of time admits a unique solution, and we could say in such cases that the unique solution is the optimal solution.

Whenever we use "best" or "optimum" to describe a system, the immediate question to be asked is, "Best with respect to what criteria and subject to what limitations?" Given a specific measure of performance and a specific set of constraints, we can designate a system as optimum (with respect to the performance measure and the constraints) if it "performs" as well as, if not better than, any other system which satisfies the constraints. The term suboptimum is used to describe any system which is not optimum (with respect to the given performance measure and constraints). Specific uses of the term suboptimum vary. It can be used in reference to systems which are not optimum because of parameter variations, or in reference to systems which are not optimum because they are designed to satisfy additional constraints, or in reference to any system which is to be compared to a reference optimum one.

Great advances have been made in optimization theory since 1940. For example, almost all of the material in the last five chapters of this book has been developed since that time. In the words of Athans, "At the present time, the field of optimization has reached a certain state of maturity, and it is regarded as one of the areas of most fervent research." The one factor that has influenced this rapid growth of optimization theory more than any other has been the parallel development of computer equipment with which optimization theory can be applied to broad classes of problems. In the remainder of this chapter, we examine the general nature of problems that are treated and those aspects of optimization that prevail throughout this book.

1-2. THE CONCEPTS OF SYSTEM AND STATE

The concept of system is fundamental to optimization theory. We can speak of systems of equations or of physical systems, both of which fall under the broad meaning of the term system. An important attribute of any system is that it be describable, perhaps only approximately and perhaps with probabilistic data included in the description. Systems are governed by rules of operation; systems generally receive inputs; and systems exhibit outputs which are influenced by inputs and by the rules of operation. Concisely put, "A system is a collection of entities or things (animate or inanimate) which receive certain inputs and is constrained to act concertedly upon them to produce certain outputs with the objective of maximizing some function of the inputs and outputs." Although the latter part of this definition tends to be restrictive, the flavor which it adds is in keeping with the objectives of this book.

In addition to inputs, outputs, and rules of operation, systems generally require the concept of state for complete description. To bring the concept of state into focus, suppose that the characterizing equations of a system are known and that outputs are to be determined which result from a set of deterministic inputs and/or changes in the system after a certain time τ. To accurately predict the response (outputs) of the system after time τ, knowledge of the state of the system at time τ is the additional information required. The following examples will serve to clarify the preceding statements.

Consider the output represented by x ≡ x(t) which is associated with the system characterized by

ẍ + aẋ + bx = m(t) (1-1)

where a and b are constants, m(t) represents an input, and ẋ equals dx/dt. To compute x(t) for t greater than τ, knowledge of m(t) for t ≥ τ and knowledge of the values of x(τ) and ẋ(τ) are sufficient. In this case, x and ẋ are a set of state variables for the system. That the state variables of systems can be specified in an unlimited number of ways is illustrated as follows:

Let x₁ and x₂ be defined by

x₁ = a₁₁x + a₁₂ẋ (1-2)

and

x₂ = a₂₁x + a₂₂ẋ (1-3)

where the aᵢⱼ's are arbitrary constants that satisfy

a₁₁a₂₂ − a₁₂a₂₁ ≠ 0 (1-4)

The variables x₁ and x₂ could be defined as state variables for the system of Equation 1-1 because they contain the same information about the system as do x and ẋ; in fact, x and ẋ can be found in terms of x₁ and x₂.
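To make the state-variable idea concrete, here is a minimal Python sketch (not from the text; the constants a and b, the input m, and the matrix A of aᵢⱼ's are arbitrary choices for illustration). It integrates Equation 1-1 forward from the state (x(τ), ẋ(τ)), then verifies that the transformed pair (x₁, x₂) of Equations 1-2 and 1-3 carries the same information, since the coefficient matrix is invertible whenever Equation 1-4 holds.

# Illustrative sketch of state variables for Eq. 1-1: xddot + a*xdot + b*x = m(t).
# All numbers below are arbitrary demonstration values, not from the text.
import numpy as np

a, b = 0.5, 2.0
m = lambda t: 1.0                      # a step input, chosen arbitrarily

def simulate(x0, xdot0, t_end=5.0, dt=1e-3):
    # Euler-integrate Eq. 1-1 forward from the state (x(tau), xdot(tau)).
    x, xdot, t = x0, xdot0, 0.0
    for _ in range(int(t_end / dt)):
        x, xdot = x + dt * xdot, xdot + dt * (m(t) - a * xdot - b * x)
        t += dt
    return x, xdot

# Eqs. 1-2 and 1-3: an alternative state (x1, x2) = A @ (x, xdot),
# a valid state description whenever det(A) != 0 (Eq. 1-4).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])             # a11*a22 - a12*a21 = -2, nonzero

state = np.array(simulate(x0=1.0, xdot0=0.0))
x1x2 = A @ state                       # the new state variables
recovered = np.linalg.solve(A, x1x2)   # x and xdot recovered from x1, x2
print(np.allclose(recovered, state))   # True: same information either way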

In the preceding example, the state of the system at an arbitrary instant t = τ is given by two numbers. For many systems, a finite set of numbers suffices for the state description at any instant t, but not all systems are so characterized. For example, consider the differential-difference equation

ẋ(t) + x(t − 1) = m(t) (1-5)

and suppose that m(t) is known for t ≥ τ. To compute x(t) for t > τ, knowledge of x(t) is required in the closed interval defined by τ - 1 ≤ t ≤ τ. Thus, a continuum of values of x is the state of this system at any time τ.
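The point can be seen in a short discretized sketch (illustrative only; the input and step size are arbitrary choices): to advance the solution of Equation 1-5, the program must carry a whole buffer of past values of x spanning one delay interval, a stand-in for the continuum of values that constitutes the state.

# Forward-Euler sketch of Eq. 1-5: xdot(t) + x(t - 1) = m(t).
# The "state" at any time is the entire history of x over the last
# delay interval, held here as a buffer of N samples.
from collections import deque

dt = 0.01
N = int(1.0 / dt)                     # samples spanning the unit delay
m = lambda t: 1.0                     # arbitrary input for illustration

history = deque([0.0] * N, maxlen=N)  # x on [t - 1, t): this buffer IS the state
x, t = 0.0, 0.0
for _ in range(5 * N):                # integrate from t = 0 to t = 5
    x_delayed = history[0]            # oldest sample approximates x(t - 1)
    history.append(x)                 # record the current value
    x += dt * (m(t) - x_delayed)      # xdot = m(t) - x(t - 1)
    t += dt
print(x)                              # settles toward 1.0, where x(t - 1) = m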

There are many classes of systems, some of which are of such fundamental importance that, to certain people who work with a particular class, the word "system" means essentially their class of system. Some of the more important categorizations of systems are summarized as follows:

There are zero-memory systems wherein inputs determine outputs directly without any need for the concept of state. There is the classification of linear system versus nonlinear system, the linear system being described in terms of strictly linear equations, such as linear algebraic or linear differential equations. There is the categorization of discrete system versus continuous system, the former being generally describable in terms of difference equations, whereas the latter is typically describable in terms of either ordinary or partial differential equations. There is the concept of feedback in systems wherein control actions are influenced by state variables. There is the concept of adaptation in systems wherein the structure of one part of a system is intentionally changed as a consequence of changes in other parts of the system structure or as a consequence of differences in inputs. There is the concept of learning in systems wherein effects of past control actions are weighted in decisions pertaining to future control actions. There is the classification of multivariable systems with multiple inputs and/or interdependent outputs. There is the classification of stochastic systems in which system inputs and/or parameters assume random properties. There are so-called logic systems in which inputs and outputs are stratified at discrete levels (the most common being two-level logic), and logic systems are subdivided into either combinational logic systems (of the zero-memory type) or sequential logic systems (in which state variables exist).

For zero-memory systems, the concept of state has no obvious meaning; but it will be observed in the programming chapters (5, 6, and 7) that certain methods of solving problems associated with zero-memory systems lead to an analogous state-type concept where, because of the sequential nature of a given solution, the state of the solution is important. In fact, we can view these solution algorithms as systems in themselves.

1-3. PERFORMANCE MEASURES

To design or plan something so that it is best in some sense is to use optimization. The sense in which the something (e.g., a system) is best, or is to be best, is a very pertinent factor. The term performance measure is used here to denote that which is to be maximized or minimized (to be extremized). Other terms are used in this regard, e.g., performance index, performance criterion, cost function, return function, and figure of merit.

If a performance measure and system constraints are clearly evident from the nature of a given problem, the sense in which the corresponding system is to be optimum is objective, rather than subjective. On the other hand, if the performance measure and/or system constraints are partially subjective, a matter of personal taste, then the attainment of an "optimal" design is also subjective. In many cases, subjective criteria are more prevalent than are the objective. In the words of Churchman, "Probably the most startling feature of twentieth-century culture is the fact that we have developed such elaborate ways of doing things and at the same time have developed no way of justifying any of the things we do." The preceding comment should not be interpreted in its strictest sense, for as noted by Hall, "If an objective (a goal) cannot be well defined it probably is not worth much, and the problem may as well be simplified by eliminating it." The process through which we proceed to set good goals can be quite challenging [1.7, and Chapter 13 of 1.4].

Examples of partially subjective conditions serve to clarify the preceding statements. In the first place, suppose that we wish to minimize the magnitude of a system error e ≡ e(t) which is a function of time and is characterized by

ė = q(e, m, t) (1-6)

where ta and tb are specified, e(ta) is given, q ≡ q(e, m, t) is a given real-valued function of its arguments, and m ≡ m(t) is a bounded control action which is to be selected to attain minimum error over the time period defined by ta ≤ t ≤ tb. Because of the differential equation characterization of e(t), e(t) at one instant of time is dependent on e(t) at other instants of time, so the sense in which the magnitude of e(t) is to be minimized is not obvious. We may choose to minimize the integral of the absolute value of the error over the closed interval [ta, tb]; we may choose to minimize the integral of the squared error; or we may choose to minimize some other integral of a positive-definite function of the system error. Commonly, the form of the "optimum" system depends on the particular performance measure employed. Note that any one of the just-noted performance measures is a function of the error e(t); that is, these performance measures are functions of a function. The term functional is used to denote a function which maps a function of independent variables into scalar values.
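As a quick illustration of how the choice of functional matters, the sketch below (the two error histories are invented for this example) scores the same pair of trajectories under the integral-of-absolute-error and integral-of-squared-error measures and obtains opposite rankings.

# Two invented error histories, scored by two common error functionals.
import numpy as np

t = np.linspace(0.0, 2.0, 2001)
dt = t[1] - t[0]
e1 = np.full_like(t, 0.5)             # small but sustained error
e2 = np.where(t < 0.4, 2.0, 0.0)      # brief, large error spike

iae = lambda e: float(np.sum(np.abs(e)) * dt)   # integral of |e| (Riemann sum)
ise = lambda e: float(np.sum(e**2) * dt)        # integral of e squared

print(iae(e1), iae(e2))   # ~1.0 vs ~0.8: the absolute-error measure prefers e2
print(ise(e1), ise(e2))   # ~0.5 vs ~1.6: the squared-error measure prefers e1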

As a second example of partially subjective conditions, consider two companies which design, manufacture, and sell similar products. Company A has a reputation for producing quality products which withstand the harshest of treatment. Company B on the other hand has a reputation for producing good but not quite so durable products with a lower price than similar products of company A. If an employee changes jobs and shifts from company B to company A, his subjective criteria for "goodness" of a product must also change if he is to be successful in working for company A.

A performance measure is not necessarily a single entity, such as cost, but may represent a weighted sum of various factors of interest: e.g., cost, reliability, safety, economy of operation and repair, accuracy of operation, size and weight, user comfort and convenience, etc. To illustrate this point, consider a case where cost C and reliability R are the factors to be included in the performance measure (other factors of interest are generally required to satisfy constraint relationships; constraint considerations are examined in the following section). It is desired that C be small but R be large. Suppose the performance measure P is

P = w1C + w2R (1-7)

where w1 is a dimensionless, positive weighting factor; w2 is a negative weighting factor with an appropriate monetary dimension; and P is to be minimized with respect to allowable functions or parameters (those which satisfy all constraints imposed on the functions or parameters) upon which C and R depend. Note that w2 is assigned a negative value; the reason for this can be gleaned from the limiting case where w1 is assigned the value of zero, in which case the minimum of w2R corresponds to the maximum of R, which is desired, because w2 is negative. Analytically,

maximum R = -minimum (-R) (1-8)

In general, the factors that are deemed more important in a given case should be weighted more heavily in the associated performance measure. This weighting process is partially subjective if strictly objective criteria are not evident — because of this, results obtained by using optimization theory should be carefully examined from the standpoint of overall acceptability.
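A small numeric sketch of this weighting idea (the cost and reliability models below are invented purely for illustration) sweeps a single design parameter and minimizes P = w1C + w2R with w2 negative, so that higher reliability lowers P, in keeping with Equation 1-8.

# Minimize P = w1*C + w2*R over one design parameter p (in the spirit of Eq. 1-7).
# C(p) and R(p) are invented models, purely for illustration.
import numpy as np

p = np.linspace(0.1, 5.0, 500)        # candidate designs
C = 10.0 * p                          # cost rises with p (monetary units)
R = 1.0 - np.exp(-p)                  # reliability saturates toward 1

w1, w2 = 1.0, -40.0                   # w2 < 0: high reliability reduces P
P = w1 * C + w2 * R

best = np.argmin(P)
print(p[best], C[best], R[best])      # the compromise design (p near ln 4)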

1-4. CONSTRAINTS

Any relationship that must be satisfied is a constraint. Constraints are classified either as equality constraints or as inequality constraints. Arguments of constraint relationships are related in some well-defined fashion to arguments of corresponding performance measures. Thus, if a particular performance measure depends on parameters and functions to be selected for the optimum, the associated constraints depend, either directly or indirectly, on at least some of the same parameters and functions. Constraints limit the set of solutions from which an optimal solution is to be found.

If certain parts of a system are fixed, the equations which characterize the interactions of these fixed parts are constraint equations. For example, a control system is usually required to control some process; the dynamics of the process to be controlled are seldom at the discretion of the control system designer and, therefore, are constraints with which he must contend. Likewise, if a system is but a part of a larger system, in which case the former may be referred to as a subsystem, the larger system may impose constraints on the subsystem; e.g., only certain specific power sources may be available to the subsystem, or the subsystem may be required to fit into a limited space and to weigh no more than a specified amount, or we may be required to design the subsystem using only a limited set of devices because of some a priori decisions made in regard to the larger system.
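A minimal sketch, assuming SciPy is available, shows how equality and inequality constraints of this kind narrow the candidate set: an invented cost is minimized subject to a weight limit (inequality) and a fixed power budget (equality).

# Constrained-minimization sketch; the cost and constraint models are invented.
# SciPy expresses inequality constraints as g(x) >= 0.
from scipy.optimize import minimize

cost = lambda x: x[0]**2 + 2.0 * x[1]**2                      # invented cost
weight_limit = {"type": "ineq", "fun": lambda x: 10.0 - (3.0 * x[0] + x[1])}
power_budget = {"type": "eq",   "fun": lambda x: x[0] + x[1] - 4.0}

result = minimize(cost, x0=[1.0, 1.0], method="SLSQP",
                  constraints=[weight_limit, power_budget])
print(result.x, result.fun)   # roughly x = (2.67, 1.33), within the weight limit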

(Continues…)



Excerpted from "Optimization Theory With Applications"
by Donald A. Pierre.
Copyright © 1986 Donald A. Pierre.
Excerpted by permission of Dover Publications, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

1 INTRODUCTION
2 CLASSICAL THEORY OF MINIMA AND MAXIMA
3 CLASSICAL CALCULUS OF VARIATIONS
4 WIENER-HOPF SPECTRUM FACTORIZATION AND FREQUENCY-DOMAIN OPTIMIZATION
5 THE SIMPLEX TECHNIQUE AND LINEAR PROGRAMMING
6 SEARCH TECHNIQUES AND NONLINEAR PROGRAMMING
7 A PRINCIPLE OF OPTIMALITY AND DYNAMIC PROGRAMMING
8 A MAXIMUM PRINCIPLE
APPENDICES
A. MATRIX IDENTITIES AND OPERATIONS
B. TWO-SIDED LAPLACE TRANSFORM THEORY
C. CORRELATION FUNCTIONS AND POWER-DENSITY SPECTRA
D. INEQUALITIES AND ABSTRACT SPACES
AUTHOR INDEX
SUBJECT INDEX
