More About This Textbook
Overview
Nonlinear Dynamical Systems and Control presents and develops an extensive treatment of stability analysis and control design of nonlinear dynamical systems, with an emphasis on Lyapunov-based methods. Dynamical system theory lies at the heart of mathematical sciences and engineering. The application of dynamical systems has crossed interdisciplinary boundaries from chemistry to biochemistry to chemical kinetics, from medicine to biology to population genetics, from economics to sociology to psychology, and from physics to mechanics to engineering. The increasingly complex nature of engineering systems requiring feedback control to obtain a desired system behavior also gives rise to dynamical systems.
Wassim Haddad and VijaySekhar Chellaboina provide an exhaustive treatment of nonlinear systems theory and control using the highest standards of exposition and rigor. This graduate-level textbook goes well beyond standard treatments by developing Lyapunov stability theory, partial stability, boundedness, input-to-state stability, input-output stability, finite-time stability, semistability, stability of sets and periodic orbits, and stability theorems via vector Lyapunov functions. A complete and thorough treatment of dissipativity theory, absolute stability theory, stability of feedback systems, optimal control, disturbance rejection control, and robust control for nonlinear dynamical systems is also given. This book is an indispensable resource for applied mathematicians, dynamical systems theorists, control theorists, and engineers.
Editorial Reviews
Mathematical Reviews  Olusola Akinyele
The book is lucid and well written and contains numerous worked examples for specific applications to important classes of systems as well as numerous problems and suggestions for further study at the end of the main chapters. This book will be an excellent source of reference materials for graduate students of applied mathematics, control theorists and engineers studying the stability theory of dynamical systems and controls. It will also be a rich source of materials for self study by researchers and practitioners interested in systems theory of engineering, controls, computer science, chemistry, life sciences and economics.
From the Publisher
Wassim Haddad, Winner of the 2014 Pendray Aerospace Literature Award, American Institute of Aeronautics and Astronautics
Meet the Author
Wassim M. Haddad is professor of aerospace engineering at the Georgia Institute of Technology. VijaySekhar Chellaboina is associate professor of mechanical, aerospace, and biomedical engineering at the University of Tennessee. They are the coauthors of "Impulsive and Hybrid Dynamical Systems: Stability, Dissipativity, and Control" and "Thermodynamics: A Dynamical Systems Approach" (both Princeton).
Read an Excerpt
By Wassim M. Haddad and VijaySekhar Chellaboina. Princeton University Press.
Copyright © 2008 Princeton University Press
All rights reserved.
ISBN: 9780691133294
Chapter One Introduction
A system is a combination of components or parts that is perceived as a single entity. The parts making up the system may be clearly or vaguely defined. These parts are related to each other through a particular set of variables, called the states of the system, that completely determine the behavior of the system at any given time. A dynamical system is a system whose state changes with time. Specifically, the state of a dynamical system can be regarded as an information storage or memory of past system events. The set of (internal) states of a dynamical system must be sufficiently rich to completely determine the behavior of the system for any future time. Hence, the state of a dynamical system at a given time is uniquely determined by the state of the system at the initial time and the present input to the system. In other words, the state of a dynamical system in general depends on both the present input to the system and the past history of the system. Even though it is often assumed that the state of a dynamical system is the least set of state variables needed to completely predict the effect of the past upon the future of the system, this is merely a convenient simplifying assumption.
We regard a dynamical system G as a mathematical model structure involving an input, state, and output that can capture the dynamical description of a given class of physical systems. Specifically, at each moment of time t ∈ T, where T denotes a time-ordered subset of the reals, the dynamical system G receives an input u(t) (e.g., matter, energy, information) and generates an output y(t). The values of the input are taken from the fixed set U. Furthermore, over a time segment the input function u : [t₁, t₂) → U is not arbitrary but belongs to the admissible input class 𝒰, that is, for every u(·) ∈ 𝒰 and t ∈ T, u(t) ∈ U. The input class 𝒰 depends on the physical description of the system. In addition, each system output y(t) belongs to the fixed set Y with y(·) ∈ 𝒴 over a given time segment, where 𝒴 denotes an output space. In general, the output of G depends on both the present input of G and the past history of G. Thus, the state, and hence the output at some time t ∈ T, depends on both the initial state x(t₀) = x₀ and the input segment u : [t₀, t) → U. In other words, knowledge of both x₀ and u ∈ 𝒰 is necessary and sufficient to determine the present and future state x(t) = s(t, t₀, x₀, u) of G.
In light of the above discussion, we view a dynamical system as a precise mathematical object defined on a time set as a mapping between vector spaces satisfying a set of axioms. A mathematical dynamical system thus consists of the space of states D of the system together with a rule or dynamic that determines the state of the system at a given future time from a given present state. This is formalized by the following definition. For this definition T = R for continuous-time systems and T = Z for discrete-time systems.
Definition 1.1. A dynamical system G on D is the octuple (D, U, 𝒰, Y, 𝒴, T, s, h), where s : T × T × D × 𝒰 → D and h : T × D × U → Y are such that the following axioms hold:
i) (Continuity): For every t₀ ∈ T, x₀ ∈ D, and u ∈ 𝒰, s(·, t₀, x₀, u) is continuous for all t ∈ T.
ii) (Consistency): For every x₀ ∈ D, u ∈ 𝒰, and t₀ ∈ T, s(t₀, t₀, x₀, u) = x₀.
iii) (Determinism): For every t₀ ∈ T and x₀ ∈ D, s(t, t₀, x₀, u₁) = s(t, t₀, x₀, u₂) for all t ∈ T and u₁, u₂ ∈ 𝒰 satisfying u₁(τ) = u₂(τ), τ ∈ [t₀, t].
iv) (Group property): s(t₂, t₀, x₀, u) = s(t₂, t₁, s(t₁, t₀, x₀, u), u) for all t₀, t₁, t₂ ∈ T, t₀ ≤ t₁ ≤ t₂, x₀ ∈ D, and u ∈ 𝒰.
v) (Read-out map): There exists y ∈ 𝒴 such that y(t) = h(t, s(t, t₀, x₀, u), u(t)) for all x₀ ∈ D, u ∈ 𝒰, t₀ ∈ T, and t ∈ T.
We denote the dynamical system (D, U, 𝒰, Y, 𝒴, T, s, h) by G and we refer to the map s(·, t₀, ·, u) as the flow or trajectory corresponding to x₀ ∈ D, t₀ ∈ T, and u ∈ 𝒰; and for a given trajectory s(t, t₀, x₀, u), t ∈ T, we refer to t₀ ∈ T as an initial time of G, x₀ ∈ D as an initial condition of G, and u ∈ 𝒰 as an input to G. The dynamical system G is isolated if the input space consists of one element only, that is, u(t) ≡ u*, and the dynamical system is undisturbed if u* = 0. If G is isolated, then G receives no time-varying inputs, and the environment acts on the system only through the fixed input u*. This, for example, would correspond to a conservative mechanical system wherein the only external force acting on the system is gravity. In general, the output of G depends on both the present input of G and the past history of G. Hence, the output of the dynamical system at some time t ∈ T depends on the state s(t, t₀, x₀, u) of G, which effectively serves as an information storage (memory) of past history. Furthermore, the determinism axiom ensures that the state, and hence the output, before some time t ∈ T is not influenced by the values of the input after time t. Thus, future inputs to G do not affect past and present outputs of G. This is simply a statement of causality that holds for all physical systems. The notion of a dynamical system as defined in Definition 1.1 is far too general to develop useful practical deductions for dynamical systems. This notion of a dynamical system is introduced here to develop terminology and to introduce certain key concepts.
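The axioms of Definition 1.1 can be checked concretely for a simple flow. The sketch below is a hypothetical illustration of ours (not from the book): it takes the scalar linear system ẋ(t) = Ax(t) + u with a constant input u, whose flow has the closed form s(t, t₀, x₀, u) = e^{A(t−t₀)}x₀ + (u/A)(e^{A(t−t₀)} − 1), and verifies the consistency and group-property axioms numerically.

```python
import math

# Toy flow for x'(t) = A*x(t) + u with constant input u.
# A, s, and h below are our own illustrative names, not the book's code.
A = -0.5  # system coefficient (assumed nonzero so the closed form is valid)

def s(t, t0, x0, u):
    """Flow map s(t, t0, x0, u): closed-form solution of x' = A*x + u."""
    e = math.exp(A * (t - t0))
    return e * x0 + (u / A) * (e - 1.0)

def h(t, x, u):
    """Read-out map: here the output is simply the state."""
    return x

# Axiom (ii), consistency: s(t0, t0, x0, u) = x0.
assert abs(s(2.0, 2.0, 3.0, 1.0) - 3.0) < 1e-12

# Axiom (iv), group property:
# s(t2, t0, x0, u) = s(t2, t1, s(t1, t0, x0, u), u) for t0 <= t1 <= t2.
t0, t1, t2, x0, u = 0.0, 1.0, 4.0, 3.0, 1.0
direct = s(t2, t0, x0, u)
composed = s(t2, t1, s(t1, t0, x0, u), u)
assert abs(direct - composed) < 1e-9
```

Determinism holds trivially here because a constant input is determined by its values on [t₀, t]; for genuinely time-varying input classes the flow would carry an integral of the input against the state transition function.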
As we will see in the next chapter, under additional regularity conditions the flow of a dynamical system describing the motion of the system as a function of time can generate a differential equation on the state space, allowing for the development of a large array of mathematical results leading to useful and practical analysis and control synthesis tools.
Determining the rule or dynamic that defines the state of physical and engineering systems at a given future time from a given present state is one of the central problems of science and engineering. Once the flow of a dynamical system describing the motion of the system starting from a given initial state is given, dynamical system theory can be used to describe the behavior of the system states over time for different initial conditions. Throughout the centuries, from the great cosmic theorists of ancient Greece to the present-day quest for a unified field theory, the most important dynamical system is our universe. By using abstract mathematical models and attaching them to the physical world, astronomers, mathematicians, and physicists have used abstract thought to deduce something that is true about the natural system of the cosmos.
The quest by scientists, such as Brahe, Kepler, Galileo, Newton, Huygens, Euler, Lagrange, Laplace, and Maxwell, to understand the regularities inherent in the distances of the planets from the sun and their periods and velocities of revolution around the sun led to the science of dynamical systems as a branch of mathematical physics. One of the most basic issues in dynamical system theory that was spawned from the study of mathematical models of our solar system is the stability of dynamical systems. System stability involves the investigation of small deviations from a system's steady state of motion. In particular, a dynamical system is stable if, after a small perturbation, it performs only persistent small oscillations about a system equilibrium, or about a state of motion. Among the first investigations of the stability of a given state of motion was that of Isaac Newton. In particular, in his Principia Mathematica [335] Newton investigated whether a small perturbation would make a particle moving in a plane around a center of attraction continue to move near the circle, or diverge from it. Newton used this analysis to study the motion of the moon orbiting the Earth.
Numerous astronomers and mathematicians who followed made significant contributions to dynamical stability theory in an effort to show that the observed deviations of planets and satellites from fixed elliptical orbits were in agreement with Newton's principle of universal gravitation. Notable contributions include the work of Torricelli, Euler, Lagrange, Laplace, Dirichlet, Liouville, Maxwell, and Routh. The most complete contribution to the stability analysis of dynamical systems was introduced in the late nineteenth century by the Russian mathematician Aleksandr Mikhailovich Lyapunov in his seminal work entitled The General Problem of the Stability of Motion. Lyapunov's direct method states that if a positive-definite function (now called a Lyapunov function) of the state coordinates of a dynamical system can be constructed for which its time rate of change following small perturbations from the system equilibrium is always negative or zero, then the system equilibrium state is stable. In other words, Lyapunov's method is based on the construction of a Lyapunov function that serves as a generalized norm of the solution of a dynamical system. Its appeal comes from the fact that stability properties of the system solutions are derived directly from the governing dynamical system equations; hence the name, Lyapunov's direct method.
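Lyapunov's direct method as described above can be illustrated numerically. The following sketch is an illustrative example of ours, not drawn from the book: for the scalar system ẋ = −x³, the candidate V(x) = x²/2 satisfies V̇(x) = V′(x)f(x) = −x⁴ ≤ 0, so V is nonincreasing along trajectories and the origin is stable; the code checks this monotonicity along a simulated solution.

```python
# Numerical check of Lyapunov's direct method for x' = -x^3 (toy example):
# V(x) = x^2/2 gives V_dot = -x^4 <= 0, so V decreases along trajectories.

def f(x):
    return -x ** 3          # system dynamics

def V(x):
    return 0.5 * x * x      # candidate Lyapunov function

def lyapunov_samples(x0, dt=1e-3, steps=5000):
    """Forward-Euler integration; returns V(x(t)) sampled along the trajectory."""
    x, vals = x0, []
    for _ in range(steps):
        vals.append(V(x))
        x += dt * f(x)
    return vals

vals = lyapunov_samples(2.0)
# V is monotonically nonincreasing along the solution, as the theory predicts.
assert all(b <= a + 1e-12 for a, b in zip(vals, vals[1:]))
```

Note that V nonincreasing gives stability only; asymptotic convergence arguments (e.g., invariant set theorems, treated in Chapter 3 of the book) require more than V̇ ≤ 0.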
Dynamical system theory grew out of the desire to analyze the mechanics of heavenly bodies and has become one of the most fundamental fields of modern science as it provides the foundation for unlocking many of the mysteries in nature and the universe that involve the evolution of time. Dynamical system theory is used to study ecological systems, geological systems, biological systems, economic systems, neural systems, and physical systems (e.g., mechanics, thermodynamics, fluids, magnetic fields, galaxies, etc.), to cite but a few examples. Dynamical system theory has also played a crucial role in the analysis and control design of numerous complex engineering systems. In particular, advances in feedback control theory have been intricately coupled to progress in dynamical system theory, and conversely, dynamical system theory has been greatly advanced by the numerous challenges posed in the analysis and control design of increasingly complex feedback control systems.
Since most physical and engineering systems are inherently nonlinear, with system nonlinearities arising from numerous sources including, for example, friction (e.g., Coulomb, hysteresis), gyroscopic effects (e.g., rotational motion), kinematic effects (e.g., backlash), input constraints (e.g., saturation, deadband), and geometric constraints, system nonlinearities must be accounted for in system analysis and control design. Nonlinear systems, however, can exhibit a very rich dynamical behavior, such as multiple equilibria, limit cycles, bifurcations, jump resonance phenomena, and chaos, which can make general nonlinear system analysis and control notoriously difficult. Lyapunov's results provide a powerful framework for analyzing the stability of nonlinear dynamical systems. Lyapunov-based methods have also been used by control system designers to obtain stabilizing feedback controllers for nonlinear systems. In particular, for smooth feedback, Lyapunov-based methods were inspired by Jurdjevic and Quinn, who give sufficient conditions for smooth stabilization based on the ability to construct a Lyapunov function for the closed-loop system. More recently, Artstein introduced the notion of a control Lyapunov function whose existence guarantees a feedback control law which globally stabilizes a nonlinear dynamical system. Even though for certain classes of nonlinear dynamical systems a universal construction of a feedback stabilizer can be obtained using control Lyapunov functions, there does not exist a unified procedure for finding a Lyapunov function that will stabilize the closed-loop system for general nonlinear systems. In light of this, advances in Lyapunov-based methods have been developed for analysis and control design for numerous classes of nonlinear dynamical systems. As a consequence, Lyapunov's direct method has become one of the cornerstones of systems and control theory.
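As a toy instance of Lyapunov-based feedback design (a standard textbook-style example of ours, not the book's construction), consider the scalar system ẋ = x³ + u, which is unstable for u = 0. Taking V(x) = x²/2 as a control Lyapunov function, the feedback u = −x³ − x gives V̇ = x(x³ + u) = −x² < 0 for x ≠ 0, so the closed loop ẋ = −x is asymptotically stable:

```python
# Lyapunov-based stabilization of x' = x^3 + u (illustrative sketch).
# With V(x) = x^2/2, the feedback u = -x^3 - x cancels the destabilizing
# term and adds damping, yielding the closed loop x' = -x.

def closed_loop(x):
    u = -x ** 3 - x          # Lyapunov-based stabilizing feedback
    return x ** 3 + u        # closed-loop dynamics: equals -x

def simulate(x0, dt=1e-3, steps=10000):
    """Forward-Euler simulation of the closed-loop system."""
    x = x0
    for _ in range(steps):
        x += dt * closed_loop(x)
    return x

# Starting from x0 = 1.5, the closed-loop state decays toward the origin.
assert abs(simulate(1.5)) < 1.5
assert abs(simulate(1.5)) < 1e-3
```

Exact cancellation of x³ is what makes this example easy; as the text notes, no unified procedure exists for constructing such Lyapunov functions for general nonlinear systems.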
The main objective of this book is to present the necessary mathematical tools for stability analysis and control design of nonlinear systems, with an emphasis on Lyapunov-based methods. The main contents of the book are as follows. In Chapter 2, we provide a systematic development of nonlinear ordinary differential equations, which is central to the study of nonlinear dynamical system theory. Specifically, we develop qualitative solution properties, existence and uniqueness of solutions, continuity of solutions, and continuous dependence of solutions on system initial conditions for nonlinear dynamical systems.
In Chapter 3, we develop stability theory for nonlinear dynamical systems. Specifically, Lyapunov stability theorems are developed for time-invariant nonlinear dynamical systems. Furthermore, invariant set stability theorems, converse Lyapunov theorems, and Lyapunov instability theorems are also considered. Finally, we present several systematic approaches for constructing Lyapunov functions as well as stability of linear systems and Lyapunov's linearization method. Chapter 4 provides an advanced treatment of stability theory including partial stability, stability theory for time-varying systems, Lagrange stability, boundedness, ultimate boundedness, input-to-state stability, finite-time stability, semistability, and stability theorems via vector Lyapunov functions. In addition, Lyapunov and asymptotic stability of sets as well as stability of periodic orbits are also systematically addressed. In particular, local and global stability theorems are given using lower semicontinuous Lyapunov functions. Furthermore, generalized invariant set theorems are derived wherein system trajectories converge to a union of largest invariant sets contained on the boundary of the intersections over finite intervals of the closure of generalized Lyapunov level surfaces. These results provide transparent generalizations to standard Lyapunov and invariant set theorems.
(Continues...)
Table of Contents
Conventions and Notation xv
Preface xxi
Chapter 1. Introduction 1
Chapter 2. Dynamical Systems and Differential Equations 9
Chapter 3. Stability Theory for Nonlinear Dynamical Systems 135
Chapter 4. Advanced Stability Theory 207
Chapter 5. Dissipativity Theory for Nonlinear Dynamical Systems 325
Chapter 6. Stability and Optimality of Feedback Dynamical Systems 411
Chapter 7. Input-Output Stability and Dissipativity 471
Chapter 8. Optimal Nonlinear Feedback Control 511
Chapter 9. Inverse Optimal Control and Integrator Backstepping 557
Chapter 10. Disturbance Rejection Control for Nonlinear Dynamical Systems 603
Chapter 11. Robust Control for Nonlinear Uncertain Systems 649
Chapter 12. Structured Parametric Uncertainty and Parameter-Dependent Lyapunov Functions 719
Chapter 13. Stability and Dissipativity Theory for Discrete-Time Nonlinear Dynamical Systems 763
Chapter 14. Discrete-Time Optimal Nonlinear Feedback Control 845
Bibliography 901
Index 939