
# Optima for Animals / Edition 1

Optima for Animals / Edition 1 available in Paperback

- ISBN-10: 0691027986
- ISBN-13: 9780691027982
- Pub. Date: 11/11/1996
- Publisher: Princeton University Press

## Overview

Optimization theory is designed to find the best ways of doing things. The structures of animals, their movements, their behavior, and their life histories have all been shaped by the optimizing processes of evolution or of learning by trial and error. In this revised edition of R. McNeill Alexander's widely acclaimed *Optima for Animals*, we see how extraordinarily diverse branches of biology are illuminated by the powerful methods of optimization theory.

What is the best strength for a bone? Too weak a bone will probably break but an excessively stout one will be cumbersome. At what speed should humans change from walking to running? Should a bird take only big juicy worms or should it eat every worm it finds, and do birds make the best choices? Why do the males of some species of fishes and the females of others look after the young, while the young of others are looked after by both parents or neither? Is it possible that all these policies can be optimal, in different circumstances? This book shows how these and many other questions can be answered. The mathematics involved is explained very simply, with biology students in mind, but the book is not just for them. It is also for professionals, ranging from teachers to researchers.

## Product Details

| ISBN-13: | 9780691027982 |
|---|---|
| Publisher: | Princeton University Press |
| Publication date: | 11/11/1996 |
| Edition description: | Revised |
| Pages: | 176 |
| Product dimensions: | 7.75(w) x 10.00(h) x 0.54(d) |

## Read an Excerpt

#### Optima for Animals

**By R. McNeill Alexander**

**Princeton University Press**

**Copyright © 1996 Princeton University Press**

All rights reserved.

**ISBN: 0-691-02798-6**

#### Chapter One

Introduction

**1.1 Optimization and evolution**

Evolution is directed by natural selection. Those sets of genes which enable animals to survive and reproduce best are most likely to be transmitted to subsequent generations. The ability to survive and reproduce can be measured by the quantity known as *fitness*. If a particular set of genes is possessed by $n_1$ members of the current generation and $n_2$ members of the next generation (the counts being made at the same stage of the life history in each generation), the fitness of that set of genes is $n_2/n_1$.

Evolution favours genotypes of high fitness but it does not generally increase fitness in the species as a whole. The reason is that fitness depends on the competitors which have to be faced, as well as on other features of the environment. A genotype which has high fitness now may have much lower fitness at some time in the future when a new, improved genotype has become common. In due course it will probably be eliminated, as evolution proceeds.

Though fitness itself may not increase, other qualities which affect fitness tend to improve in the course of evolution. For instance, natural selection generally favours characters that make animals use food energy more efficiently, enabling them to survive better when food is scarce and to divert more energy to reproduction when food is more plentiful. Natural selection generally favours characters which enable animals to collect food faster, so that they can either collect more food or devote more time to other activities such as reproduction. Natural selection generally favours characters that enable animals to hide or escape from predators more effectively. It favours characters that in these and other ways fit the animal best for life in its present environment.

A shopper looking for the best buy chooses the cheapest article among several of equal quality, or the best among several of equal price. Similarly, natural selection favours sets of genes which minimize costs or maximize benefits. The costs can often be identified as mortality or energy losses, the benefits as fecundity or energy gains.

Optimization is the process of minimizing costs or maximizing benefits, or obtaining the best possible compromise between the two. Evolution by natural selection is a process of optimization. Learning can also be an optimizing process: the animal discovers the most effective technique for some purpose by trial and error. Subsequent chapters show some of the ways in which optima are approached, in the structure and lives of animals. They are therefore concerned largely with maxima and minima. The rest of this chapter is about maxima and minima and ways in which they can be found. It introduces, as simply as possible, some of the mathematical concepts and techniques that are applied to zoological problems in later chapters.

**1.2 Maxima and minima**

In Fig. 1-1(a), $y$ has a maximum value when $x = x_{\max}$. In Fig. 1-1(b), $y$ has a minimum value when $x = x_{\min}$. It is easy enough to see where the maximum and minimum are when the graphs are plotted but it will be convenient to have a method for finding the maxima and minima of algebraic expressions, without drawing graphs. The most generally useful method is supplied by the branch of mathematics called differential calculus. The rest of this section can be skipped by readers who already know a little calculus.

The method of finding maxima and minima depends on the study of gradients. Figure 1-1(c) shows a straight line passing through the points $(x_1, y_1)$ and $(x_2, y_2)$. The gradient (slope) of this line is $(y_2 - y_1)/(x_2 - x_1)$. In this case $y_2 > y_1$ and $x_2 > x_1$ so the gradient is positive. In the case illustrated in Fig. 1-1(d), however, $y_2 < y_1$ and the gradient is negative.

A straight line has the same gradient all along its length. A curve can be thought of as a chain of very short straight lines of different gradients, joined end to end, so different parts of a curve have different gradients. In Fig. 1-1(a) the gradient is positive before the maximum (i.e. at lower values of *x*), zero at the maximum and negative after the maximum. Similarly in Fig. 1-1(b) the gradient is negative before the minimum, zero at it and positive after it. At a maximum the gradient is zero and decreasing, but at a minimum it is zero and increasing.

An example will show how gradients can be calculated. Figure 1-2(a) is a graph of $y = x^2$. It shows that $y$ has just one minimum and no maximum, and that the minimum occurs when $x = 0$. Consider two points very close together on the graph, $(x, y)$ and $(x + \delta x, y + \delta y)$. The symbol $\delta x$ means a small increase in $x$ and $\delta y$ means the corresponding increase in $y$. The gradient of a straight line joining the two points is $\delta y/\delta x$. The smaller $\delta x$ is, the more nearly is $\delta y/\delta x$ equal to the gradient of the curve at $(x, y)$, which is represented by the symbol $dy/dx$. If $\delta x$ is infinitesimally small, $\delta y/\delta x = dy/dx$. Note that $dy/dx$ should be read as a single symbol: it is not a quantity $dy$ divided by a quantity $dx$, still less can it be interpreted as $(d \times y)/(d \times x)$.
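The shrinking-chord idea can be checked numerically. This is my own illustration, not the book's: for $y = x^2$ the chord gradient $\delta y/\delta x$ tends to $2x$ as $\delta x$ shrinks.

```python
# Numerical sketch: the chord slope delta_y/delta_x for y = x^2
# approaches the true gradient dy/dx = 2x as delta_x gets small.

def secant_slope(f, x, dx):
    """Slope of the chord joining (x, f(x)) and (x + dx, f(x + dx))."""
    return (f(x + dx) - f(x)) / dx

f = lambda t: t ** 2
x = 3.0
for dx in (1.0, 0.1, 0.001):
    print(dx, secant_slope(f, x, dx))  # tends towards 2x = 6.0
```

At `dx = 1.0` the chord slope is 7.0, exactly $2x + \delta x$ as the derivation below predicts, and it closes in on 6.0 as `dx` shrinks.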

In this particular case $y = x^2$ and also

$$y + \delta y = (x + \delta x)^2 = x^2 + 2x\,\delta x + (\delta x)^2$$

Subtracting the first equation from the second

$$\delta y = 2x\,\delta x + (\delta x)^2$$

$$\delta y/\delta x = 2x + \delta x$$

If $\delta x$ is infinitesimally small, the term $\delta x$ on the right hand side of the equation can be neglected, and $\delta y/\delta x = dy/dx$, so

$$dy/dx = 2x$$

It can be shown by similar arguments that if $k$ is a constant

when $y = kx$, $dy/dx = k$ (1.1)

when $y = kx^2$, $dy/dx = 2kx$ (1.2)

when $y = kx^3$, $dy/dx = 3kx^2$ (1.3)

when $y = k/x$, $dy/dx = -k/x^2$ (1.4)

and when $y = k$, $dy/dx = 0$ (1.5)

All these cases are summarized by the general statement

when $y = kx^n$, $dy/dx = nkx^{n-1}$ (1.6)
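The general power rule (1.6) can be verified numerically. A small sketch of my own (the constants are arbitrary): compare a central-difference estimate of the gradient of $y = kx^n$ with $nkx^{n-1}$.

```python
# Check the power rule dy/dx = n*k*x^(n-1) against a finite difference.

def numeric_gradient(f, x, dx=1e-6):
    """Central-difference estimate of the gradient of f at x."""
    return (f(x + dx) - f(x - dx)) / (2 * dx)

k, n, x = 2.0, 3.0, 1.5
power_rule = n * k * x ** (n - 1)                  # 3 * 2 * 1.5^2 = 13.5
numeric = numeric_gradient(lambda t: k * t ** n, x)
print(power_rule, round(numeric, 6))
```

The two values agree to several decimal places, as rule (1.6) predicts.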

Similar arguments can also be used to obtain expressions for the gradients of other functions of x, for instance

when $y = k \log_e x$, $dy/dx = k/x$ (1.7)

and when $y = \exp(kx)$, $dy/dx = k \exp(kx) = ky$ (1.8)

Notice particularly that in this last case, $dy/dx$ is proportional to $y$. If $x$ represented time this case would represent exponential growth, in which rate of growth is proportional to present size.

Other examples can be found in books on calculus. The process of obtaining $dy/dx$ from $y$ is called differentiation.

More complicated expressions are often easy to differentiate. Let *y* and *z* be two different functions of *x*. Then

if $u = y + z$, $du/dx = dy/dx + dz/dx$ (1.9)

if $u = yz$, $du/dx = z\,dy/dx + y\,dz/dx$ (1.10)

and if $u = y/z$, $du/dx = (z\,dy/dx - y\,dz/dx)/z^2$ (1.11)

Also, if *u* is a function of *y* which in turn is a function of *x*

$du/dx = (du/dy) \cdot (dy/dx)$ (1.12)

For instance, to differentiate $u = (1 + x^2)^{1/2}$, write $y = 1 + x^2$. Then $du/dy = d(y^{1/2})/dy = \tfrac{1}{2}y^{-1/2}$ and $dy/dx = 2x$, so

$$du/dx = xy^{-1/2} = x/(1 + x^2)^{1/2}$$
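The chain-rule result can be tested in the same finite-difference spirit as before. This sketch is mine, not the book's: compare $x/(1 + x^2)^{1/2}$ with a numerical gradient of $u = (1 + x^2)^{1/2}$.

```python
import math

# Verify the chain-rule result du/dx = x / (1 + x^2)^(1/2)
# for u = (1 + x^2)^(1/2), using a central difference.

def u(x):
    return math.sqrt(1 + x ** 2)

def chain_rule(x):
    return x / math.sqrt(1 + x ** 2)

x, dx = 2.0, 1e-6
numeric = (u(x + dx) - u(x - dx)) / (2 * dx)
print(chain_rule(x), round(numeric, 6))
```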

Equations (1.1) to (1.12) enable us to discover values of $x$ for which $dy/dx$ is zero, in particular cases. Thus they help us to find maxima and minima. To distinguish between maxima and minima we also need to know whether $dy/dx$ is increasing or decreasing. In other words, we need to know whether the gradient of a graph of $dy/dx$ against $x$ is positive (as in Fig. 1-2b) or negative, at the appropriate value of $x$. This can be discovered by differentiating again.

Since the symbol $dy/dx$ was used for the gradient of a graph of $y$ against $x$, it would be logical to use $d(dy/dx)/dx$ for the gradient of a graph of $dy/dx$ against $x$. It is customary to write instead, for brevity, $d^2y/dx^2$.

At a minimum $dy/dx = 0$ and $d^2y/dx^2$ is positive. (1.13)

At a maximum $dy/dx = 0$ and $d^2y/dx^2$ is negative. (1.14)

These rules will be applied to the case $y = x^2$. Differentiation gives $dy/dx = 2x$, which is zero when $x = 0$. A second differentiation (using equation 1.1) gives $d^2y/dx^2 = 2$, which is positive when $x = 0$ (and for all other values of $x$). Hence $y$ has a minimum when $x = 0$.

Most of the mathematical functions discussed in this book have just one maximum and no minima, or just one minimum and no maxima. Many other functions have a maximum and a minimum, or several of each. In such cases the rules *(1.13)* and *(1.14)* are generally adequate to find all the maxima and minima.

**1.3 Optima for aircraft**

The method which has just been explained will be illustrated by a simple example. It is about aeroplanes, not animals, but the same equation will be applied to birds in chapter 3. The power *P* required to propel an aeroplane in level flight at constant velocity *u* is given by the equation

$$P = Au^3 + BL^2/u \qquad (1.15)$$

where $A$ and $B$ are constants for the particular aircraft and $L$ is the lift, the upward aerodynamic force which supports the weight of the aircraft. The lift is produced by the wings deflecting air downwards, and the term $BL^2/u$ represents the power required for this purpose. It is called induced power. The term $Au^3$ represents power which would be needed to drive the aircraft through the air, even if no lift were required. It is called profile power. As the speed $u$ increases, profile power increases but induced power decreases, and there is a particular speed at which the total power $P$ is a minimum. It can be found by differential calculus.

Differentiate equation (1.15), using equations (1.3) and (1.4):

$$dP/du = 3Au^2 - BL^2/u^2 \qquad (1.16)$$

which is zero when $u$ has the value $u_{\min P}$ given by

$$3Au_{\min P}^2 = BL^2/u_{\min P}^2$$

$$u_{\min P} = (BL^2/3A)^{1/4} \qquad (1.17)$$

Differentiate equation (1.16)

$$d^2P/du^2 = 6Au + 2BL^2/u^3$$

which is positive for all positive values of *u*. This confirms that the power is a minimum, at the speed given by equation (1.17).
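The analytic result (1.17) can be checked by brute force. In this sketch of mine the values of $A$, $B$ and $L$ are invented for illustration; any positive values would do.

```python
# Minimize P(u) = A*u^3 + B*L^2/u by scanning speeds, and compare
# with the analytic minimum-power speed (B*L^2 / 3A)^(1/4) of (1.17).

A, B, L = 0.01, 0.05, 100.0   # invented constants, for illustration only

def power(u):
    return A * u ** 3 + B * L ** 2 / u

analytic = (B * L ** 2 / (3 * A)) ** 0.25
numeric = min((u / 1000 for u in range(1, 200000)), key=power)
print(round(analytic, 3), numeric)
```

The scan lands on the same speed as the formula, to the resolution of the search grid.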

This is the speed at which least power is needed to propel the aircraft, but it may not be the optimum speed. If the aircraft flew faster it would need more power but it would reach its destination sooner and might use less fuel for the journey. The energy *E* required to travel unit distance is given by

$$E = P/u = Au^2 + BL^2/u^2$$

Differentiation gives

$$dE/du = 2Au - 2BL^2/u^3$$

and $E$ has its minimum value at the speed $u_{\min E}$ given by

$$u_{\min E} = (BL^2/A)^{1/4} \qquad (1.18)$$

This is 32% faster than the speed $u_{\min P}$. A pilot wishing to remain airborne for as long as possible without re-fuelling should fly at $u_{\min P}$, but a pilot wishing to fly as far as possible without re-fuelling should fly at $u_{\min E}$.
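The 32% figure follows directly from (1.17) and (1.18): the ratio $u_{\min E}/u_{\min P} = \left[(BL^2/A)/(BL^2/3A)\right]^{1/4} = 3^{1/4}$, independent of $A$, $B$ and $L$. A one-line check:

```python
# u_minE / u_minP = 3^(1/4) for any aircraft constants A, B, L.
ratio = 3 ** 0.25
print(round(100 * (ratio - 1), 1))  # percentage increase, about 31.6
```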

**1.4 Fitting lines**

Figure 1-3(a) is a graph of a quantity $y$ that depends on two variables, $a$ and $b$. Since $a$, $b$ and $y$ all have to be represented, this has to be a three-dimensional graph. The third dimension is represented by the contours which give values of $y$. These contours show that $y$ has a minimum value when $a = 2$ and $b = 3$.

Section 1.2 explained how maxima and minima can be found for functions of one variable. The rule for functions of two variables is very similar. At a maximum or minimum

$$\partial y/\partial a = 0 \quad \text{and} \quad \partial y/\partial b = 0 \qquad (1.19)$$

Remember that $dy/dx$ means the gradient of a graph of $y$ against $x$. The symbol $\partial y/\partial a$ means the gradient of a graph of $y$ against $a$, with $b$ (and any other variables) held constant. Similarly $\partial y/\partial b$ means the gradient of a graph of $y$ against $b$ with all other variables held constant. A point at which equations (1.19) both hold may be a maximum, or a minimum, or neither. The rules for deciding which are more complicated than conditions (1.13) and (1.14) and are explained in books about calculus.

The usefulness of conditions *(1.19)* will be illustrated by explaining a standard statistical procedure. Experiments often lead to graphs like Fig. 1-3(b). The points are scattered (due perhaps to experimental errors) but suggest a sloping line. *Continues...*
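The excerpt breaks off before naming the procedure, but the standard method for fitting a sloping line to scattered points is least squares, and it uses conditions (1.19) exactly as described. This sketch assumes that reading: setting $\partial S/\partial a = 0$ and $\partial S/\partial b = 0$ for the summed squared error $S(a, b) = \sum_i (y_i - a - bx_i)^2$ gives the usual normal equations, solved here directly (the data points are invented).

```python
# Least-squares line fit y = a + b*x via the normal equations, which
# come from setting both partial derivatives of the squared error to zero.

def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = (sy - b * sx) / n                            # intercept
    return a, b

print(fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.1, 4.9, 7.0]))
```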

Excerpted from *Optima for Animals* by R. McNeill Alexander. Copyright © 1996 by Princeton University Press. Excerpted by permission.

All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.


## Table of Contents

Preface

1 Introduction 1

2 Optimum structures 17

3 Optimum movements 45

4 Optimum behaviour 65

5 Optimum life-styles 105

6 Dangers and difficulties 141

7 Mathematical summary 151

References 161

Index 167