Foundations of Dynamic Economic Analysis: Optimal Control Theory and Applications / Edition 1

by Michael R. Caputo
ISBN-10:
0521603684
ISBN-13:
9780521603683
Pub. Date:
01/17/2005
Publisher:
Cambridge University Press
Paperback

$88.99

Overview

This work presents a thorough introductory exposition of optimal control theory and differs from existing textbooks on the subject in its emphasis on the economic interpretation of the mathematics and on the qualitative properties of the solutions. Moreover, it is a modern exposition of optimal control theory in that it presents numerous complementary methods. It is aimed at first-year and second-year PhD students in economics, agricultural and resource economics, operations research, management science, and applied mathematics.

Product Details

ISBN-13: 9780521603683
Publisher: Cambridge University Press
Publication date: 01/17/2005
Edition description: New Edition
Pages: 592
Product dimensions: 6.06(w) x 9.25(h) x 1.22(d)

About the Author

Michael R. Caputo is Professor of Economics in the College of Business Administration, University of Central Florida, in Orlando. He received his PhD in economics from the University of Washington, where in 1986 the Department of Economics awarded him the Henry C. Beuchel Memorial Award for distinguished undergraduate teaching. Professor Caputo then taught in the Department of Agricultural and Resource Economics at the University of California, Davis, from 1987 to 2003. In 1998 he was listed in Who's Who Among America's Teachers. Professor Caputo's research has appeared in numerous peer-reviewed journals, including the Review of Economic Studies, the Journal of Economic Theory, the International Economic Review, the Review of Economics and Statistics, the Journal of Economic Dynamics and Control, the Journal of Mathematical Economics, the Journal of Optimization Theory and Applications, the Journal of Economics, and the American Journal of Agricultural Economics.

Read an Excerpt

Foundations of Dynamic Economic Analysis
Cambridge University Press
0521842727 - Foundations of Dynamic Economic Analysis - Optimal Control Theory and Applications - by Michael R. Caputo
Excerpt



ONE

Essential Elements of Continuous Time Dynamic Optimization


In order to motivate the following introductory material on dynamic optimization problems, it will be advantageous to draw heavily on your knowledge of static optimization theory. To that end, we begin by recalling the definition of the prototype unconstrained static optimization problem, namely,
$$\phi(\alpha) \;\stackrel{\mathrm{def}}{=}\; \max_{x \in \Re^{N}} f(x; \alpha) \qquad (1)$$

where x ∈ ℜN is a vector of decision or choice variables, α ∈ ℜA is a vector of parameters, f(·) is the twice continuously differentiable objective function, that is, f(·) ∈ C(2), and φ (·) is the indirect or maximized objective function. This is terminology you should be more or less familiar with from prior courses.

Because we will deal repeatedly with vectors and matrices as well as the derivatives of scalar- and vector-valued functions in this book, we pause momentarily to establish three notational conventions that we shall adhere to throughout. First, all vectors are treated as column vectors. To denote a row vector, we therefore employ the transpose operator, denoted by the symbol ′. Thus x ∈ ℜN is taken to be an N-element column vector, whereas x′ is an N-element row vector. Note also that vectors appear in boldface type.

Second, if g(·) : ℜN → ℜM is a C(1) vector-valued function, thereby implying that g(·) ≜ (g1(·), g2(·), . . . , gM(·))′, then at any x ∈ ℜN, we define the M × N Jacobian matrix of g(·) by
$$g_x(x) \;\stackrel{\mathrm{def}}{=}\;
\begin{bmatrix}
g^{1}_{x_1}(x) & g^{1}_{x_2}(x) & \cdots & g^{1}_{x_N}(x) \\
g^{2}_{x_1}(x) & g^{2}_{x_2}(x) & \cdots & g^{2}_{x_N}(x) \\
\vdots & \vdots & \ddots & \vdots \\
g^{M}_{x_1}(x) & g^{M}_{x_2}(x) & \cdots & g^{M}_{x_N}(x)
\end{bmatrix} \qquad (2)$$

where gmxn (x) is the partial derivative of gm(·) with respect to xn evaluated at the point x. It is also the element in the mth row and nth column of gx (x). This definition implies that if M = 1, so that g(·) : ℜN → ℜ is now a scalar-valued function, then gx (x) = (gx1 (x), gx2 (x), . . . , gxN (x)) is a row vector, or equivalently, a 1 × N matrix. This means that the derivative of a scalar-valued function with respect to a column vector is a row vector. As an extension of this notation, if we now assume that g(·) : ℜN + A → ℜM is a C(1) function whose arguments are the vectors x ∈ ℜN and α ∈ ℜA, then gx (x; α) is the M × N Jacobian matrix given in Eq. (2), whereas gα (x; α) is an M × A Jacobian matrix defined similarly.
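As a minimal illustrative sketch of these conventions (in Python with SymPy; the example functions are hypothetical, not taken from the text), the Jacobian of a vector-valued g(·) : ℜ³ → ℜ² is a 2 × 3 matrix, while the derivative of a scalar-valued function with respect to the column vector x is a 1 × 3 row vector:

```python
# Illustration of the Jacobian convention; the functions below are
# hypothetical examples chosen only to show the matrix shapes.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')        # N = 3 choice variables
x = sp.Matrix([x1, x2, x3])

g = sp.Matrix([x1**2 + x2, x2*x3])         # M = 2 component functions
print(g.jacobian(x).shape)                 # (2, 3): the M x N Jacobian g_x(x)

f = sp.Matrix([x1*x2 + x3**2])             # scalar-valued case, M = 1
print(f.jacobian(x))                       # Matrix([[x2, x1, 2*x3]]): a 1 x N row vector
```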

Third, if g(·) : ℜN + A → ℜ is a C(2) scalar-valued function whose arguments are the vectors x ∈ ℜN and α ∈ ℜA, then there are four Hessian matrices that can be defined based on g(·) because of the two different sets of variables that it depends on, scilicet,
$$g_{xx}(x; \alpha) \in \Re^{N \times N}, \qquad g_{\alpha\alpha}(x; \alpha) \in \Re^{A \times A}, \qquad g_{x\alpha}(x; \alpha) \in \Re^{N \times A},$$

and the A × N matrix gαx (x; α), the computation of which we leave for a mental exercise. We remark in passing that there is a matrix version of the invariance of the second-order partial derivatives to the order of differentiation, and this too is left for a mental exercise.

With the notational matters settled, let's now return to the unconstrained static optimization problem defined in Eq. (1). Assume that an optimal solution exists to problem (1), say x = x∗(α). Typically, we would find the solution by simultaneously solving the first-order necessary conditions (FONCs) of problem (1), which are given by
$$f_x(x; \alpha) = 0'_{N}$$

in vector notation, where 0N is the null (column) vector in ℜN, or by
$$f_{x_n}(x; \alpha) = 0, \qquad n = 1, 2, \ldots, N,$$

in index notation. Unless the objective function f(·) happens to have a particularly simple functional form, an explicit solution for x = x∗(α) is rare. If, however, we assume that the second-order sufficient condition (SOSC) holds at x = x∗(α), that is,
$$h' f_{xx}(x^{*}(\alpha); \alpha)\, h < 0 \qquad \forall\, h \in \Re^{N},\ h \neq 0_N,$$

or
$$\sum_{n=1}^{N} \sum_{m=1}^{N} f_{x_n x_m}(x^{*}(\alpha); \alpha)\, h_n h_m < 0 \qquad \forall\, h \in \Re^{N},\ h \neq 0_N,$$

then we can apply the implicit function theorem to the FONCs to solve for the optimal choice vector x = x∗(α) in principle. To see why this is so, recall that the Jacobian matrix of the FONCs is given by the N × N matrix fxx (x; α), which is identical to the Hessian matrix of the objective function. Moreover, the SOSC implies that the Hessian determinant of the objective function is nonvanishing at the optimal solution, that is, that | fxx (x; α) | ≠ 0 when evaluated at x = x∗(α). Because this is equivalent to the nonvanishing of the Jacobian determinant of the FONCs, the implicit function theorem may be applied to the FONCs to solve, in principle, for the optimal choice vector x = x∗(α). Again, this line of reasoning should be familiar to you from prior courses in microeconomic theory.
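A minimal sketch of this machinery (SymPy assumed; the quadratic objective is a hypothetical example chosen only for tractability) solves the FONCs for x∗(α) and confirms that the negative definite Hessian delivers the nonvanishing determinant the implicit function theorem requires:

```python
# Hypothetical example: f(x; a) = a*x1 - x1**2 + x2 - x2**2, a strictly
# concave objective, so the FONCs characterize the maximum.
import sympy as sp

x1, x2, a = sp.symbols('x1 x2 a')
f = a*x1 - x1**2 + x2 - x2**2

fonc = [sp.diff(f, v) for v in (x1, x2)]       # FONCs: f_x(x; a) = 0'
xstar = sp.solve(fonc, (x1, x2))               # {x1: a/2, x2: 1/2}, i.e., x*(a)

H = sp.hessian(f, (x1, x2))                    # Hessian f_xx = diag(-2, -2)
print(xstar, H.det())                          # SOSC holds and |f_xx| = 4 != 0
```

Note how the solution x∗(α) varies with the parameter α, exactly as in the discussion that follows.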

In light of the dynamic problems that will occupy us in this book, the most important aspect of the above discussion concerning problem (1) is that for a given value of the parameters, say, α = α°, we typically solve for a particular value for each of the decision or choice variables, say, xn° = xn∗ (α°), n = 1,2, . . . ,N. We do this by solving a set of algebraic equations, that is to say, the FONCs. If the parameter vector is different, say, α = α1, then usually a different value of the choice variables is implied, say, xn1 = xn∗ (α1), n = 1,2, . . . , N. By their very nature, therefore, static optimization problems ask the decision maker to pick out a particular value of the decision variables given the parameters of the problem. For A = N = 1, Figure 1.1 depicts this situation graphically.

To add an economic spin to all of this, recall the prototype profit-maximizing model of the price-taking firm:
$$\pi^{*}(p, w_1, w_2) \;\stackrel{\mathrm{def}}{=}\; \max_{(x_1, x_2)} \left[\, p F(x_1, x_2) - w_1 x_1 - w_2 x_2 \,\right]$$

where F(·) is the twice continuously differentiable production function, x1 and x2 are the decision variables representing the inputs of the firm, w1 and w2 are the market prices of the inputs, p is the output price, and π∗(·) is the indirect profit function. Given a particular set of prices, say, (p, w1, w2 ) = (p°, w1°, w2°), the firm seeks to determine the values of the inputs that maximize its profit, say, xn° = xn∗ (p°, w1°, w2°), n = 1,2. If such optimal values exist, then they are found, in principle, by simultaneously solving the FONCs, given by
$$p F_{x_1}(x_1, x_2) - w_1 = 0, \qquad p F_{x_2}(x_1, x_2) - w_2 = 0.$$

[Figure 1.1: image not available in this excerpt]

Note that these are algebraic equations that in general must be solved simultaneously for the optimal values of the inputs. To repeat, the most important point to take away from this discussion is that the choice of the optimal input combination is made just one time: there is no planning for the future, nor are there future decisions to be made in this problem. This is exactly as the static framework of the problem dictates.
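For instance, under an assumed Cobb-Douglas technology F(x1, x2) = x1^(1/4) x2^(1/4) (a hypothetical functional form, not from the text), the two FONCs can be solved simultaneously at a given price vector; a rough numerical sketch:

```python
# Hypothetical Cobb-Douglas firm: solve p*F_xn - wn = 0, n = 1, 2, numerically.
from scipy.optimize import fsolve

p, w1, w2 = 4.0, 1.0, 2.0                       # one particular price vector

def fonc(x):
    x1, x2 = x
    F1 = 0.25 * x1**(-0.75) * x2**0.25          # marginal product of input 1
    F2 = 0.25 * x1**0.25 * x2**(-0.75)          # marginal product of input 2
    return [p*F1 - w1, p*F2 - w2]               # the two FONCs, solved simultaneously

xstar = fsolve(fonc, x0=[1.0, 1.0])             # the optimal input bundle for these prices
print(xstar)
```

A different price vector would generate a different optimal input bundle, underscoring that the static choice is made once, given the parameters.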

In contrast, given the parameters of a dynamic optimization problem, its solution is a sequence of optimal decisions in discrete time, or a time path or curve of optimal decisions in continuous time, over the relevant planning period or planning horizon, not just one particular value for each of the decision variables like the solution to a static optimization problem. The optimal time path or curve is, by definition, the one that optimizes some type of objective function. The type of objective function in dynamic problems, however, is quite different from that in static problems. Because the solution to a continuous time dynamic optimization problem is a time path or curve, it appears to be reasonable and even natural for the objective function to place a value on the decision variables at each point in time of the planning horizon and to add up the resulting values over the relevant planning period, akin to what is done when one computes the present value of some stream of net benefits that are received over time.

To better motivate the form of the objective function in dynamic problems, consider Figure 1.2. Here three typical time paths of a function x(·), or curves x(t) associated with the function x(·), are displayed along with the resulting value of the objective function associated with each time path J[x(·)], the latter of which we refer to as a path value. We have denoted the independent variable by the letter t and refer to it as time, as this is the natural interpretation of the independent variable in intertemporal problems in economics. Notice that all time paths or curves begin

[Figure 1.2: image not available in this excerpt]

at time t = t0 at the point x = x0 and end at time t = t1 at the point x = x1, all four of which are given or fixed, thereby requiring that the paths being compared begin and end at the same position and time. The typical problem in dynamic optimization seeks to find a time path or curve x(t), or equivalently a function x(·), that, say, maximizes the objective function J[·]. Thus to each time path or curve xi (t), i = a, b, c, or function xi (·), i = a, b, c, there is a corresponding value of the objective function J[xi (·)], i = a, b, c.

The relationship between paths x(t) or functions x(·) and the resulting value J[x(·)] is quite different from that encountered in typical lower division mathematics courses. It represents a mapping from paths or curves to real numbers, or equivalently, from functions to real numbers, and therefore is not a mapping from real numbers to real numbers as in the case of functions. Such a mapping from paths or curves to path values, or from functions to real numbers, is what Figure 1.2 depicts and is called a functional. The general notation we shall employ for such a mapping from functions to real numbers is J[x(·)], which has been employed above. This notation emphasizes that the functional J[ · ] depends on the function x(·), or equivalently, on the entire curve x(t). Moreover, it highlights the fact that it is a change in the position of the entire path or curve x(t), that is, the variation in the path or curve x(t), rather than the change in t, that results in a change in the path value or functional J[ · ]. Thus, a dynamic optimization problem in continuous time seeks to find a path or curve x(t), or equivalently, a function x(·), that optimizes an objective functional J[ · ].

Next we consider in more detail the form of an archetype objective functional J[·]. Because the optimal solution to a continuous time dynamic optimization problem is a path or curve x(t), as noted above, associated with the path is its slope ẋ(t) ≜ dx(t)/dt at each point in time t in the planning horizon, assuming, of course, that the path x(t) is smooth enough so that ẋ(t) is defined. Suppose, moreover, that there exists a function, say, F(·), that assigns or imputes a value to the path and its associated derivative at each point in time in the planning horizon, the latter represented by the closed interval [t0, t1], 0 < t0 < t1. The imputed value of the path at each point in the planning horizon, therefore, depends on the moment of time t the decision is made and the value of the decision or choice variable at that time x(t), as well as on the slope at that time ẋ(t). Hence we have F(t, x(t), ẋ(t)) as the value of the function that imputes a value to the path x(t) with slope ẋ(t) at time t. Because the path x(t) must necessarily travel through an interval of time, namely, the planning horizon, its total value as represented by the functional J[x(·)] is given by the "sum" of all the imputed values F(t, x(t), ẋ(t)) for each t in the planning horizon [t0, t1]. Moreover, because we are operating in continuous time, the appropriate notion of summation is represented by a definite integral over the closed interval [t0, t1]. Thus, the value of the functional we wish to optimize is given by
$$J[x(\cdot)] \;\stackrel{\mathrm{def}}{=}\; \int_{t_0}^{t_1} F(t, x(t), \dot{x}(t))\, dt \qquad (3)$$

In economics, J[x(·)] often represents the present value of net benefits from pursuing the policy x(t), with instantaneous net benefits given by F(t, x(t), ẋ(t)). In general, one calls x(t) the state of the system, or value of the state variable, at time t, and ẋ(t) the rate of change of the system, or velocity, at time t. Typically, the explicit appearance of t as an argument of F(·) is a result of discounting in economic problems. Equation (3) represents the prototype form of the objective functional for calculus of variations problems, which are but one class of continuous time dynamic optimization problems. More general objective functionals will be introduced shortly, when we commence with the study of optimal control theory. But for now, the present discussion and motivation are sufficient, for the idea of a mapping from paths or curves to the real line and the form of the objective functional is what one must come away with.
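To make the notion of a path value concrete, the following sketch (the integrand F and the two paths are assumed purely for illustration) evaluates J[x(·)] for two different curves joining the same endpoints; each entire path receives a single real number as its path value:

```python
# Two hypothetical paths from (t0, x0) = (0, 0) to (t1, x1) = (1, 1), with an
# assumed integrand F(t, x, xdot) = exp(-r*t)*(x - 0.5*xdot**2): discounted net benefits.
import numpy as np

t = np.linspace(0.0, 1.0, 10001)
r = 0.05                                            # an assumed discount rate

def J(x, xdot):
    F = np.exp(-r*t) * (x - 0.5*xdot**2)            # imputed value at each instant t
    return np.sum(0.5*(F[1:] + F[:-1])*np.diff(t))  # the "sum": a definite integral (trapezoid rule)

print(J(t, np.ones_like(t)))                        # path a: x(t) = t, a straight line
print(J(t**2, 2.0*t))                               # path b: x(t) = t**2, same endpoints, different path value
```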

Before moving on to some additional motivational material, a few remarks about the form of the objective functional in Eq. (3) are warranted. First, the functional J[·] is not, in general, the area under the curve x(t) between the points t = t0 and t = t1, the latter of which is represented by the integral ∫_{t0}^{t1} x(t) dt. Thus, in optimizing J[·], we are not making decisions to optimize the area under the curve x(t). Rather, we are picking paths x(t) such that the "sum" of the values imputed to the path x(t) and its derivative ẋ(t) at each point in time in the planning horizon is optimized. Second, the form of J[·] given in Eq. (3) is not the most general form of the objective functional for calculus of variations problems, but it is the most common or canonical in economic theory, as examples to be introduced later will confirm. In motivating the form of J[·], for example, there is no particular reason why F(·) would not, in general, depend on the second or higher derivatives of x(t). What dictates which derivatives of the path x(t) are relevant to the form of J[·] is the particular economic phenomenon under study, and as we will shortly see, most of the problems of relevance to economists dictate the presence of ẋ(t) in F(·), but rarely higher derivatives.

The question you may now be wrestling with is: How does one know when to construct a dynamic model, as opposed to a static model, to study the economic events under investigation? The answer is most easily explained and motivated within the context of a simple example. Imagine an individual on an isolated island on which fresh water is available in an unlimited supply, and for which essentially no effort is required to collect the water. You may picture a brook or stream of fresh water passing next to the individual's hut. The only source of food is fish, which does require the expenditure of effort on the part of this individual. Clearly this individual can survive on this island, and the problem that this person faces is how much fish to catch each day for consumption.

Initially, let's assume that the harvested fish are impossible to store for any length of time, either because of the extreme heat of the environment or because of the lack of materials necessary to build a suitable storage facility. As this individual gets up on any day, the decision to be made is how much fish to catch for this day only, as storage has been ruled out. Any fish caught beyond the amount to be consumed that day simply rots and is wasted. Because fishing is costly to the individual, the catch on any day will not exceed the individual's consumption per day. Notice that the decision of how much fish to catch this day is independent of fish caught on previous days or fish expected to be caught on future days, as the lack of storage prevents any carryover of the fish. This lack of storage breaks any link between past decisions and the present decision, and any link between the present decision and future decisions. In other words, the absence of any durable asset or the inability of this person to store any of the asset (fish) renders current choices or actions independent of those made in the past, or those to be made in the future. For example, even if this person caught twice as many fish as could be consumed in a day, this would not relieve the individual of fishing the following day because the fish simply spoil, leaving zero edible fish for tomorrow. Thus the decision of how much fish to catch on any day is dependent only on the circumstances or environmental conditions of that day. As the reader may have guessed, this is exactly what dictates the decision problem faced by this individual as simply a sequence of static choice problems, each day's decision being independent of past and future decisions, and identical to that to be made on any other day, save for differing environmental conditions.

The reader may wonder why this situation is not a dynamic optimization problem since a sequence of optimal decisions must be made through time. It is not the sequence of decisions or the introduction of time per se that defines a dynamic choice problem but the link between past, present, and future decisions that makes a problem dynamic. In the scenario above, the lack of storage breaks this link, reducing the problem to a sequence of independent static optimization problems. So to have a dynamic optimization problem, there must be some systematic link between past, present, and future decisions.

Now let's assume that fish caught on any day in excess of that day's consumption can be stored. Because we are concerned here about the structure of a problem that makes it dynamic, the actual period of time in which the fish can be stored or preserved is not important; the fact that fish caught on one day can be stored for future consumption is the important idea. Just as in the previous scenario, when this individual wakes up on any given day, a decision about how much fish to catch that day must be made. What is different, however, is that a stock of fish may exist in storage from the previous day's catch, and this stock must be taken into account in today's harvesting decision. Thus, the assumption of storage (or a durable good) provides a direct link between past decisions and the current decision, a link that was absent when storage was ruled out. Likewise, the decision to catch fish (or not) today impacts the amount of fish in storage for future consumption, and therefore impacts future decisions about catching fish. Storage provides a link between current decisions and future decisions as well. It is exactly this intertemporal linking of decisions that makes this second scenario a dynamic choice problem: decisions made in the past affect the current choice, which, in turn, affects future choices.

Although it may appear that in this simple example, there is only one variable, scilicet, fish, there are actually two: catching fish and fish in storage. The storage of fish is not a variable that is controlled directly by the individual; it responds to the amount caught, amount eaten, and time elapsed between catches. In macroeconomic terminology, the amount of fish in storage is a stock variable, or a state variable in the language of optimal control theory; that is, it is defined at a point in time, not over a period or length of time. The act of fishing, on the other hand, is defined over a period of time (a flow variable) and is directly under the control of the individual. In the language of optimal control theory, the catch rate of the fish is the control variable.

With the essence of a continuous time dynamic optimization problem now conveyed, let's turn to the motivation and basic mathematical setup of an optimal control problem. We therefore elect to proceed directly to optimal control theory rather than first formally introducing the calculus of variations and then optimal control theory.

Optimal control theory is based on a new way of viewing and formulating calculus of variations problems, and thus enables one to see them in a different light. In particular, optimal control theory often brings the economic intuition and content of a continuous time dynamic optimization problem to the surface more readily than does the calculus of variations, thereby enhancing one's economic understanding of the problem. It is this change of vista that makes optimal control theory a powerful tool for analyzing dynamic economic problems. The calculus of variations can, in principle, solve any problem that optimal control theory can solve, though not necessarily as easily, and where both apply they yield equivalent results. In fact, some textbooks, such as Hadley and Kemp (1971), develop the calculus of variations in its full generality and then use the results to prove those in optimal control theory.

The focus in optimal control theory is on some system. In economic problems, this may be the economy, an individual, or a firm. As is usual in intertemporal problems, we are interested in optimizing, in some specified sense, the behavior of the system through time. It is assumed that the manner in which the system changes through time can be described by specifying the time behavior of certain variables, say x(t) ∈ ℜN, called state variables, where t is the independent variable that we will almost always refer to as time. In general, the vector-valued function x(·) : ℜ → ℜN is assumed to be a piecewise smooth function of time with not more than a finite number of corners. This means that the component functions xn(·), n = 1,2, . . . , N, are continuous but that the derivative functions ẋn(·), n = 1,2, . . . , N, are piecewise continuous in the sense that ẋn(·) has at most a finite number of discontinuities on each finite interval with finite jumps (i.e., one-sided limits) at each point of discontinuity. In economic problems, a capital stock, a stock of money or any asset for that matter (e.g., wealth), a stock of fish, the amount of some mineral in the ground, the stock of water in an aquifer, the number of chairs in a classroom, or even the distribution function of a random variable may represent a state variable. Generally, the state variables are defined at a given point in time, as the aforementioned examples indicate. This is why state variables are often referred to as stocks by economists.

Before pressing on, it is prudent at this juncture to pause momentarily and give a precise definition of a piecewise continuous function and a piecewise smooth function, for such functions will be encountered with some regularity in optimal control theory. To that end, we have the following definition.

Definition 1.1: A function φ(·) is said to be piecewise continuous on an interval α ≤ t ≤ β if the interval can be partitioned by a finite number of points α = t0 < t1 < ... < tK = β so that

  1. φ(·) is continuous on each open subinterval tk-1 < t < tk, k = 1,2, . . . , K, and

  2. φ(·) approaches a finite limit as the end points of each subinterval are approached from within the subinterval.

In other words, a function φ(·) is piecewise continuous on an interval α ≤ t ≤ β if it is continuous there except for a finite number of jump discontinuities. An example of a piecewise continuous function is shown in Figure 1.3. Given this definition, it is now a relatively simple matter to define a piecewise smooth function.

Definition 1.2: A function φ(·) is said to be piecewise smooth on an interval α ≤ t ≤ β if its derivative function φ′(·) is piecewise continuous on the interval α ≤ t ≤ β.

This definition therefore implies that the derivative of a piecewise smooth function is piecewise continuous, and that the integral of a piecewise continuous function is piecewise smooth.
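A simple assumed example may help fix ideas: the path x(t) = |t - 1/2| on [0, 1] is piecewise smooth, since x(·) is continuous everywhere while its derivative jumps from -1 to +1 at the single corner t = 1/2. A brief numerical sketch:

```python
# Hypothetical piecewise smooth path: x(t) = |t - 0.5| on [0, 1].
import numpy as np

t = np.linspace(0.0, 1.0, 11)
x = np.abs(t - 0.5)                          # continuous, with one corner at t = 0.5
xdot = np.where(t < 0.5, -1.0, 1.0)          # derivative: piecewise continuous, one finite jump
print(np.column_stack([t, x, xdot]))
```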

It is also assumed that there exists another class of variables known as control variables, say u(t) ∈ ℜM, where M may be less than, equal to, or greater than N. The control variables may undergo jump changes and are therefore only restricted to be piecewise continuous in general, that is, the function u(·) : ℜ → ℜM is assumed to be a piecewise continuous

[Figure 1.3: image not available in this excerpt]

function of time. In economic problems, control variables are usually represented by flow variables that are typically defined over an interval of time. Archetypal control variables in economic problems include the investment rate in an asset, the consumption rate of a good, and the harvest rate of a resource. Control variables are those variables that the decision maker has explicit control of in the optimal control problem, as the name implies. Control variables are thus the analogues of the choice or decision variables in static optimization theory. The control variables will not, in general, be allowed to take on arbitrary values. Generally, it is assumed that for each t in the planning horizon, the control variables are restricted to vary in a fixed and prespecified set U ⊆ ℜM, called the control set or control region. It is typically required that u(t) ∈ U for the entire planning horizon. The control set U may be any fixed set in ℜM, say, an open or closed set, but is not restricted in any special way. Of particular importance is the case in which U is a closed set in ℜM. In this case, the control variables are allowed to take values at the boundary of the control set, a situation that the classical calculus of variations cannot handle so easily. In typical economic problems, the control set may be represented by nonnegativity restrictions on the control variables or by fixed inequality constraints that bound the control variables. Finally, it is assumed that every variable of interest can be classified as a state variable or a control variable.

In view of the restrictions placed on the two classes of variables, we may think of the control variables as governing not the values of the state variables, but their rate of change. More specifically, it will be assumed that the dependence of the state variables on the control variables can be described by a first-order differential equation system, namely,
$$\dot{x}(t) = g(t, x(t), u(t))$$

in vector notation, or
$$\dot{x}_n(t) = g^{n}(t, x(t), u(t)), \qquad n = 1, 2, \ldots, N,$$

in index notation, known as the state equation. The transition functions gn(·), n = 1,2, . . . , N, are given functions that describe the dynamics of the system. In general, the rate of change of each state variable depends on all of the state variables, all of the control variables, various economic and technical parameters, and explicitly on time t, though the specific problem under consideration will dictate the exact form of the state equation and the variables appearing in it. The explicit dependence of the transition functions on t allows for the evolution of the state variables to depend on important exogenous factors, such as technological progress. Furthermore, suppose that the state of the system is known at time t0, so that x(t0) = x0, where x0 ∈ ℜN is a given vector. If the time path of the control variables is specified by a certain control function, say, u(·), defined for t ≥ t0, and we substitute it into the state equation, we obtain a system of N first-order ordinary differential equations for the N unknown functions xn(·), n = 1,2, . . . , N. Because the initial value x0 ∈ ℜN of the state variable is given, the state equation will have a unique solution x(t) under rather mild assumptions, given by the fundamental existence and uniqueness theorem for ordinary differential equations. This solution is represented geometrically by a curve in ℜN. Because this solution is essentially a response to the control function u(·), it would be appropriate to denote it by xu(t), but we will drop the subscript u as is customarily done. Clearly, we could have selected another control function, and a corresponding time path of the state variable would be generated. Thus, in general, for each control function selected, there corresponds a path for the state variables that represents the solution to the state equation and initial condition. As a result of this observation, it follows that the control and state variables are essentially paired, in that once a control function is specified, the corresponding time path of the state variables is completely determined via the state equation.
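The control-state pairing can be sketched numerically (all functional forms below are assumed for illustration): once a control function u(·) and an initial condition are specified, the state equation becomes an ordinary differential equation whose unique solution is the induced state path.

```python
# Hypothetical transition function g(t, x, u) = u - 0.5*x with an assumed control path u(t).
import numpy as np
from scipy.integrate import solve_ivp

def u(t):
    return 0.2 + 0.1*np.sin(t)                       # a specified control function u(.)

def g(t, x):
    return u(t) - 0.5*x                              # state equation: xdot = g(t, x, u(t))

sol = solve_ivp(g, t_span=(0.0, 10.0), y0=[1.0])     # x(t0) = x0 = 1 pins down a unique state path
print(sol.y[0, -1])                                  # the state at the terminal time generated by this u(.)
```

Choosing a different control function u(·) in this sketch generates a different state path, which is precisely the pairing described above.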



© Cambridge University Press

Table of Contents

1. Essential elements of continuous time dynamic optimization; 2. Necessary conditions for a simplified control problem; 3. Concavity and sufficiency in optimal control problems; 4. The maximum principle and economic interpretations; 5. Linear optimal control problems; 6. Necessary and sufficient conditions for a general class of control problems; 7. Necessary and sufficient conditions for isoperimetric problems; 8. Economic characterization of reciprocal isoperimetric problems; 9. The dynamic envelope theorem and economic interpretations; 10. The dynamic envelope theorem and transversality conditions; 11. Comparative dynamics via envelope methods; 12. Discounting, current values, and time consistency; 13. Local stability and phase portraits of autonomous differential equations; 14. Necessary and sufficient conditions for infinite horizon control problems; 15. The neoclassical optimal economic growth model; 16. A dynamic limit pricing model of the firm; 17. The adjustment cost model of the firm; 18. Qualitative properties of infinite horizon optimal control problems with one state variable and one control variable; 19. Dynamic programming and the Hamilton-Jacobi-Bellman equation; 20. Intertemporal duality in the adjustment cost model of the firm.