The physics of extended systems is a topic of great interest for the experimentalist and the theoretician alike. There exists a large literature on this subject in which solutions, bifurcations, fronts, and the dynamical stability of these objects are discussed. To the uninitiated reader, the theoretical methods that lead to the various results often seem somewhat ad hoc, and it is not clear how to generalize them to the next (that is, not yet solved) problem. In an introduction to the subject of instabilities in spatially infinite systems, Pierre Collet and Jean-Pierre Eckmann aim to give a systematic account of these methods, and to work out the relevant features that make them operational. The book examines in detail a number of model equations from physics. The mathematical developments of the subject are based on bifurcation theory and on the theory of invariant manifolds. These are combined to give a coherent description of several problems in which instabilities occur, notably the Eckhaus instability and the formation of fronts in the Swift-Hohenberg equation. These phenomena can appear only in infinite systems, and this book breaks new ground as a systematic account of the mathematics connected with infinite space domains.
Originally published in 1990.
The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.
SETTING THE STAGE
1. Physical Equations
Partial differential equations describing the time evolution of a physical system are very common. They are derived either directly from basic physical principles or as reduced descriptions of microscopic evolutions, valid in some limit. We shall not discuss the question of establishing these macroscopic equations; it is a very difficult problem which is not completely understood.
To fix the ideas, we mention some of the most important macroscopic equations, with the idea that the theories discussed below will apply, without too many changes, to all these equations. One of the best known equations is the Navier-Stokes equation, which describes the time evolution of the velocity field $v(x, t) : \mathbb{R}^3 \times \mathbb{R} \to \mathbb{R}^3$ of a fluid. We denote by $x$ an element of $\mathbb{R}^3$ and by $x_i$, $i = 1, 2, 3$, its components. It is given by
$$\partial_t v(x, t) = \nu \Delta_x v(x, t) - (v(x, t) \cdot \nabla_x) v(x, t) + f(x, t) - \nabla_x p(x, t),$$
with the incompressibility constraint
$$\nabla_x \cdot v(x, t) = 0, \qquad (1.1)$$
and where $p$ is the pressure field, $f$ the (bulk) force field, and $\nu$ the viscosity. The symbol $\cdot$ denotes the scalar product in $\mathbb{R}^3$. In finite volume one should add, of course, boundary conditions.
Other physical phenomena can take place in the fluid which are not described by the Navier-Stokes equation. For example, if heat effects cannot be neglected, one should couple the Navier-Stokes equation to the heat equation. Under some physical assumptions one gets the Boussinesq equations
$$\partial_t v = \nu \Delta_x v - (v \cdot \nabla_x) v - \nabla_x p + \alpha g\, T e_z, \qquad \partial_t T = \kappa \Delta_x T - (v \cdot \nabla_x) T, \qquad \nabla_x \cdot v = 0. \qquad (1.2)$$
Here, $e_z$ is a unit vector pointing in the vertical direction, $T$ is the temperature field, $\alpha$ denotes the thermal expansion coefficient, $\kappa$ the thermal diffusivity, and $g$ the gravitational acceleration. If the fluid is composed of charged particles, one should add a coupling with the equation describing the evolution of an electromagnetic field, and one obtains the so-called magnetohydrodynamical system. We refer the reader to [Ch] for more information on these equations.
Similar equations also occur in the study of chemical systems. Assume a container is filled with $N$ different chemical species. Each of them is described by its concentration field $\rho_i(x, t) : \mathbb{R}^3 \times \mathbb{R} \to \mathbb{R}$ ($i = 1, \ldots, N$). The chemical evolution equations are given by
$$\partial_t \rho_i(x, t) + \nabla_x \cdot q_i(x, t) = F_i(x, t),$$
where $q_i : \mathbb{R}^3 \times \mathbb{R} \to \mathbb{R}^3$ is the flux and $F_i$ is the source term for species number $i$. The flux is most often given by
$$q_i(x, t) = -D_i \nabla_x \rho_i(x, t) + c_i \rho_i(x, t),$$
where the first term is a diffusion term (with diffusion constant $D_i$) and the second one is a transport term (at velocity $c_i \in \mathbb{R}^3$). The source term $F_i \in \mathbb{R}$ originates in the chemical reactions among the various species. It is usually a nonlinear function of the various concentrations.
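As a concrete illustration (not from the text), the following minimal sketch advances a single concentration field by explicit Euler steps of the conservation law above, with flux composed of diffusion and transport. The logistic source $F(\rho) = \rho(1 - \rho)$, the domain size, and all parameter values are hypothetical choices made only for the demonstration.

```python
import numpy as np

# Sketch: evolve  d_t rho + d_x q = F(rho)  in one space dimension,
# with flux  q = -D d_x rho + c rho  and a hypothetical logistic source
# F(rho) = rho (1 - rho) standing in for the chemistry.

L, N = 10.0, 100                       # periodic domain of length L, N grid points
dx, dt = L / N, 2e-3                   # dt chosen so that D*dt/dx^2 <= 1/2 (stability)
D, c = 1.0, 0.5                        # diffusion constant and transport velocity

def d_dx(u):
    # central difference on the periodic grid
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

def step(rho):
    q = -D * d_dx(rho) + c * rho       # flux: diffusion + transport
    F = rho * (1 - rho)                # hypothetical nonlinear source term
    return rho + dt * (F - d_dx(q))    # explicit Euler update of the conservation law

x = np.linspace(0, L, N, endpoint=False)
rho = 0.5 + 0.1 * np.sin(2 * np.pi * x / L)
for _ in range(100):                   # evolve to t = 0.2
    rho = step(rho)
```

The source term pushes the concentration toward 1 while diffusion smooths the initial modulation; the field remains bounded between 0 and 1 over this short run.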
Similar examples also occur in biology, combustion and the theory of flames, elasticity, metallurgy, etc.
If these equations are considered in finite space volumes, one should add to them boundary conditions. These equations are also often considered in infinite volume, and we will later explain why. It turns out that these two problems are of a very different nature. The reason is that problems in finite volume can be reduced (at least formally) to finite systems of coupled differential equations, while this is not possible for infinite volume. In the latter case, the space variables can introduce new nontrivial effects, but we also get simplified formulations from the translation invariance.
2. Attractors
The time evolution of a physical system with finitely many degrees of freedom is often described by a system of differential equations of the form
dx/dt = X(x)
where the vector x(t) represents the state of the system at time t. The set of all possible states is called the phase space of the system and is often a vector space Rd. The vector-valued function X(x) is also called a vector field. The class of equations of the above type is called the class of dynamical systems. There are, of course, extensions of this formulation to vector fields on manifolds, or to partial differential equations, but we shall first discuss the more elementary (but still very frequent and important) situation of differential equations.
We shall be interested mostly in the large time behavior of the evolution, and in particular in properties of this large time behavior which do not depend too much on the initial condition. While some transient properties are interesting and important in their own right, most questions in pure or applied physics deal with the large time behavior.
There are numerous examples of time evolution equations; to be specific, we shall illustrate some basic notions in the well-known case of Classical Mechanics. The evolution equations are Newton's equations and we shall discuss them in the Hamiltonian formalism. The phase space is even dimensional (of dimension 2d) with a state represented by two d-dimensional vectors q and p (respectively position and momentum). The evolution equations are constructed using a real-valued function H = H(p, q), called the Hamiltonian. They are given by
$$\frac{dq}{dt} = \frac{\partial H}{\partial p}, \qquad \frac{dp}{dt} = -\frac{\partial H}{\partial q}.$$
A well-known and simple example is the harmonic oscillator, with Hamiltonian given by $H = (p^2 + \omega^2 q^2)/2$ (the phase space dimension is 2). The quantity $\omega$ is constant. The evolution equations are easy to integrate, but since the value of the Hamiltonian is conserved during the time evolution, we conclude at once that the trajectories in the two-dimensional phase space are ellipses, and the time evolution simply proceeds along these ellipses. This is the general situation for integrable Hamiltonian systems, where the time evolution takes place on invariant tori, cf. [Ar].
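The conservation of H along trajectories is easy to check numerically. The sketch below (an illustration, not part of the text, with an arbitrarily chosen ω) integrates the oscillator with a classical fourth-order Runge-Kutta scheme and verifies that the energy, and hence the ellipse, is preserved to high accuracy.

```python
# Sketch: trajectories of the harmonic oscillator
#   dq/dt = p,  dp/dt = -omega^2 q
# stay on level sets of  H = (p^2 + omega^2 q^2)/2,
# i.e. on ellipses in the (q, p) plane.

def H(q, p, omega=2.0):
    return 0.5 * (p**2 + omega**2 * q**2)

def integrate(q, p, omega=2.0, dt=1e-4, steps=20000):
    # Classical 4th-order Runge-Kutta for the linear system.
    def rhs(q, p):
        return p, -omega**2 * q
    for _ in range(steps):
        k1q, k1p = rhs(q, p)
        k2q, k2p = rhs(q + 0.5*dt*k1q, p + 0.5*dt*k1p)
        k3q, k3p = rhs(q + 0.5*dt*k2q, p + 0.5*dt*k2p)
        k4q, k4p = rhs(q + dt*k3q, p + dt*k3p)
        q += dt * (k1q + 2*k2q + 2*k3q + k4q) / 6
        p += dt * (k1p + 2*k2p + 2*k3p + k4p) / 6
    return q, p

q0, p0 = 1.0, 0.0
q1, p1 = integrate(q0, p0)             # integrate up to t = 2
drift = abs(H(q1, p1) - H(q0, p0))     # energy drift along the numerical orbit
```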
A slightly more interesting case is obtained if we add a linear friction term to the above time evolution. The system is no longer Hamiltonian, and not even conservative (energy is lost by friction). If the friction coefficient is denoted by the positive number η, the evolution equations are given by
$$\frac{dq}{dt} = p, \qquad \frac{dp}{dt} = -\omega^2 q - \eta p.$$
In Section 7 and in Section 19, we shall present, in a more general context, the method of replacing higher order differential equations by a first order system of equations. If we compute the time derivative of the Hamiltonian $H = (p^2 + \omega^2 q^2)/2$ for this evolution, we get
$$\frac{dH}{dt} = -\eta p^2.$$
This implies the well-known fact that for any initial condition, the trajectory converges to the origin when time goes to infinity. We conclude that the origin, which is a stationary solution of the dynamics, describes the large time evolution of any initial condition.
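A short numerical experiment (again illustrative, with arbitrarily chosen ω and η) confirms this: the energy decays by orders of magnitude and the trajectory ends up near the origin.

```python
# Sketch: with linear friction eta > 0 the energy H = (p^2 + omega^2 q^2)/2
# decreases (dH/dt = -eta p^2 <= 0) and every trajectory of
#   dq/dt = p,  dp/dt = -omega^2 q - eta p
# spirals into the origin.

omega, eta, dt = 2.0, 0.5, 1e-3
q, p = 1.0, 0.0
energies = []
for _ in range(20000):                 # explicit Euler, up to t = 20
    q, p = q + dt * p, p + dt * (-omega**2 * q - eta * p)
    energies.append(0.5 * (p**2 + omega**2 * q**2))
```

After twenty time units the transient e^(-eta t / 2) has reduced the amplitude by roughly e^(-5), so the remaining energy is a tiny fraction of the initial one.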
This simple example also illustrates a common feature of dissipative dynamical systems which is the contraction of phase space. This implies that the large time dynamics is taking place on a smaller part of phase space. Intuitively, if we have a dissipative system, it will dissipate all its initial kinetic energy, and the asymptotic time evolution will be stationary as in the above example. In order to obtain a nontrivial asymptotic time evolution, we should therefore compensate the loss of energy by dissipation. This can be done by an external forcing. In what follows, we shall be concerned mostly with such dissipative systems. We want to emphasize, however, that many ideas and results can be adapted to the study of conservative systems.
Coming back to our simple example of the harmonic oscillator, we can now look at a damped oscillator with a harmonic forcing of period 2π/α and amplitude A. The evolution equation is now given by
$$\frac{dq}{dt} = p, \qquad \frac{dp}{dt} = -\omega^2 q - \eta p + A \cos(\alpha t).$$
The (well-known) solution of this system is given by
$$q(t) = \frac{A}{\sqrt{(\omega^2 - \alpha^2)^2 + \eta^2 \alpha^2}} \cos(\alpha t - \varphi) + \text{tt}, \qquad p(t) = \frac{dq}{dt}, \qquad \tan \varphi = \frac{\eta \alpha}{\omega^2 - \alpha^2},$$
where "tt" means transitory terms which vanish as $t \to \infty$. We conclude immediately from these formulas that for any initial condition, the asymptotic time evolution is the same and takes place on the same ellipse. By compensating the losses of energy due to the friction term, we have obtained a nontrivial asymptotic dynamics. Note also that this asymptotic dynamics is stable in the sense that if we start with an initial condition near the ellipse, the trajectory will spiral towards the ellipse (in this simple example this is in fact true for any initial condition). The ellipse is sometimes called a dynamical stationary state or dynamical equilibrium. Note also that it depends on the parameters of the problem. If we change, for example, the friction coefficient η, we get a different ellipse.
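This insensitivity to the initial condition can also be observed numerically. In the sketch below (parameter values chosen only for illustration), two trajectories of the forced, damped oscillator started far apart end at essentially the same point in phase space, because their difference obeys the homogeneous damped equation and dies out with the transitory terms.

```python
import math

# Sketch: two initial conditions of the forced, damped oscillator
#   dq/dt = p,  dp/dt = -omega^2 q - eta p + A cos(alpha t)
# land on the same attracting ellipse once the transients have decayed.

omega, eta, A, alpha, dt = 2.0, 0.5, 1.0, 1.3, 1e-3

def trajectory(q, p, steps=60000):
    t = 0.0
    for _ in range(steps):             # explicit Euler, up to t = 60
        q, p = q + dt * p, p + dt * (-omega**2 * q - eta * p + A * math.cos(alpha * t))
        t += dt
    return q, p

qa, pa = trajectory(1.0, 0.0)
qb, pb = trajectory(-3.0, 2.0)
gap = math.hypot(qa - qb, pa - pb)     # distance between the two end states
```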
Summarizing the above examples, we see that the time evolution of a dissipative system contracts phase space. If the losses are not compensated, the evolution comes to rest. If they are compensated, a nontrivial dynamical equilibrium can appear in some part of phase space. The interesting dynamical equilibria are those which are locally attracting, that is, nearby initial conditions will give rise to trajectories which stay nearby and approach for large time the dynamical equilibrium (stability).
We can now formulate a mathematical definition for this notion of stable dynamical equilibrium.
Definition 2.1. A set Ω is called an attractor for a vector field X if it is invariant and if there is a neighborhood V of Ω such that for every sufficiently small neighborhood U of Ω, and for large enough t, the flow $S^t$ generated by X satisfies $S^t(V) \subset U$.
Remark. One usually considers irreducible attractors. These are attractors containing a dense orbit.
We also want to emphasize that in the phase space of a dynamical system, several attractors can coexist in different regions. For a given attractor, the set of initial conditions which converge for large time to the attractor is called the basin (of attraction) of the attractor. In phase space, there can be other invariant sets, such as the boundary between basins of attraction, or repellers. A simple example with several attractors can be constructed as follows. Consider the Hamiltonian dynamical system in $\mathbb{R}^2$ associated with the Hamiltonian $H = p^2/2 + V(q)$, where the potential is given by $V(q) = -q^2/2 + q^4/4$. If we add a friction term as before, it is easy to verify that the system has two attractors, $(q = \pm 1, p = 0)$, and a stationary solution at the origin, $(q = 0, p = 0)$, which is not an attractor. Most initial conditions near the origin flow toward one or the other attractor (we shall give later a precise discussion of this local behavior).
The attractors depend, of course, on the dynamical system. In the above example with friction, consider the one parameter family of potentials $V_\lambda(q) = -\lambda q^2/2 + q^4/4$. It is easy to show that for $\lambda < 0$ there is only one attractor, at the origin. For $\lambda > 0$, the origin is no longer an attractor, but a repeller, and in addition there are two other attractors at $(q = \pm\sqrt{\lambda}, p = 0)$.
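The bistable scenario for λ > 0 is easy to verify numerically. The sketch below (illustrative parameters: λ = 1, friction η = 1) starts two trajectories on either side of the origin and watches them settle onto the two attractors at (±√λ, 0); since the initial energy lies below the barrier at the origin and friction only removes energy, neither trajectory can cross into the other well.

```python
# Sketch: damped motion in the double well V(q) = -lam q^2/2 + q^4/4,
#   dq/dt = p,  dp/dt = -V'(q) - eta p,   V'(q) = -lam q + q^3.
# For lam > 0 the attractors are (q, p) = (+-sqrt(lam), 0).

lam, eta, dt = 1.0, 1.0, 1e-3

def settle(q, p, steps=40000):
    for _ in range(steps):             # explicit Euler, up to t = 40
        q, p = q + dt * p, p + dt * (lam * q - q**3 - eta * p)
    return q, p

q1, p1 = settle(0.1, 0.0)              # starts just right of the origin
q2, p2 = settle(-0.1, 0.0)             # starts just left of the origin
```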
3. Finite and Infinite Space Systems
In this section we explain the main difference between systems in finite volume and systems in infinite volume.
We shall first discuss the ideas for the concrete example of the Rayleigh-Bénard experiment. This experiment is realized as follows. One considers a cubic container of size l which contains a fluid (in general water, oil, or liquid helium).
The vertical walls of the container are thermally insulating while the horizontal plates are good thermal conductors. One then imposes a temperature difference between the two horizontal plates, the top one being at the lower temperature. This experimental setup is described by the system of Boussinesq equations, which gives the time evolution of the temperature field T(x, t), with the bottom plate at temperature 0, and of the velocity field v(x, t) of the fluid. In contrast to equation (1.2), we have subtracted from T a linear function of height, so that the new T equals 0 at the bottom and at the top. The evolution equation is given explicitly by
$$\partial_t v = P \Delta_x v - (v \cdot \nabla_x) v - \nabla_x p + P R\, T e_3, \qquad \partial_t T = \Delta_x T - (v \cdot \nabla_x) T + v_3, \qquad \nabla_x \cdot v = 0,$$
where $p$ is the pressure field, $P$ is the Prandtl number, $R$ is the Rayleigh number (proportional to the temperature difference of top and bottom), $v_3$ is the vertical component of the velocity field, and $e_3$ is the unit vector in the vertical direction. We use the notation $x \in \mathbb{R}^3$ and assume the components of $x$ are $x_1, x_2, x_3$. One should also add boundary conditions on the sides.
Since we are dealing with a cubic container, we can decompose the temperature and velocity fields in Fourier series in the space variables. Taking the Fourier transform of the Boussinesq system, we obtain an infinite system of coupled differential equations for the time evolution of the Fourier components. This system is a priori very complicated, but we now explain why it is, at least formally, of a finite dimensional nature. Consider the equation for the time evolution of a Fourier component of the temperature field. We take periodic boundary conditions to make the argument simpler. By Fourier components, we mean the decomposition
$$T(x, t) = \sum_{m, n, p \in \mathbb{Z}} T_{m,n,p}(t)\, e^{2\pi i (m x_1 + n x_2 + p x_3)/l},$$
where the $x_i$ are the coordinates in $\mathbb{R}^3$. The time evolution equations are then of the form
$$\frac{d}{dt} T_{m,n,p}(t) = -\frac{4\pi^2}{l^2} (m^2 + n^2 + p^2)\, T_{m,n,p}(t) + \mathcal{T}_{m,n,p}(v, T),$$
where $\mathcal{T}_{m,n,p}(v, T)$ is some complicated nonlinear expression. For large values of $m^2 + n^2 + p^2$ the first term, which is dissipative, dominates the nonlinear part. In other words, the Fourier modes of T with large indices are strongly damped, and their dynamics is essentially driven by the Fourier modes of T with small indices.
To illustrate this point more precisely, consider the much simpler differential equation for the unknown function $x : \mathbb{R} \to \mathbb{R}$ given by
$$\frac{dx(t)}{dt} = -A x(t) + f(t), \qquad (3.1)$$
where f : R >R is some given bounded function of time and A is a large positive number. The solution is
$$x(t) = e^{-At} x(0) + \int_0^t e^{-A(t-s)} f(s)\, ds.$$
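The strong damping expressed by this formula means that the solution forgets its initial condition at the rate $e^{-At}$ and then follows $f(t)/A$ up to corrections of order $1/A^2$: the fast variable is enslaved to the slow forcing. The sketch below (with A and f chosen arbitrarily for illustration) checks this numerically, starting x deliberately far from f(0)/A.

```python
import math

# Sketch: for large A > 0 the solution of  dx/dt = -A x + f(t)
# quickly forgets x(0) and then tracks f(t)/A.

A, dt = 100.0, 1e-5
x, t = 5.0, 0.0                        # initial condition far from f(0)/A
f = lambda t: math.sin(t)              # arbitrary bounded forcing
for _ in range(100000):                # explicit Euler, up to t = 1
    x, t = x + dt * (-A * x + f(t)), t + dt
slaved_gap = abs(x - f(t) / A)         # how closely x is enslaved to f(t)/A
```

By t = 1 the transient $e^{-At} x(0)$ is entirely negligible, and the remaining gap is of order $|f'(t)|/A^2$.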
Excerpted from Instabilities and Fronts in Extended Systems by Pierre Collet and Jean-Pierre Eckmann. Copyright © 1990 Princeton University Press. Excerpted by permission of PRINCETON UNIVERSITY PRESS.