Controlled Diffusion Processes / Edition 1

by N.V. Krylov; translated by A.B. Aries

ISBN-10: 0387904611

ISBN-13: 9780387904610

Pub. Date: 12/07/1980

Publisher: Springer New York

Overview

This book deals with the optimal control of solutions of fully observable Itô-type stochastic differential equations. The validity of the Bellman differential equation for payoff functions is proved, and rules for constructing optimal control strategies are developed.
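
For orientation, the standard setting referred to above can be sketched as follows (a generic formulation, not quoted from the book): a controlled process x_t driven by a Wiener process w_t under a control strategy α_t, a payoff given by an expected running cost plus a terminal reward, and a payoff (value) function v characterized by the Bellman equation.

\[
dx_t = b(\alpha_t, t, x_t)\,dt + \sigma(\alpha_t, t, x_t)\,dw_t, \qquad x_s = x,
\]
\[
v(s,x) = \sup_{\alpha}\,\mathsf{E}\Big[\int_s^T f^{\alpha_t}(t, x_t)\,dt + g(x_T)\Big],
\]
\[
v_t + \sup_{\alpha \in A}\Big[\tfrac12\,\operatorname{tr}\!\big(\sigma\sigma^{*}(\alpha,t,x)\,v_{xx}\big) + b(\alpha,t,x)\cdot v_x + f^{\alpha}(t,x)\Big] = 0, \qquad v(T,x) = g(x).
\]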

Product Details

ISBN-13:
9780387904610
Publisher:
Springer New York
Publication date:
12/07/1980
Series:
Stochastic Modelling and Applied Probability Series, #14
Edition description:
1980
Pages:
308
Product dimensions:
6.10(w) x 9.25(h) x 0.03(d)

Table of Contents

Notation xi

1 Introduction to the Theory of Controlled Diffusion Processes 1

1 The Statement of Problems-Bellman's Principle-Bellman's Equation 2

2 Examples of the Bellman Equations-The Normed Bellman Equation 7

3 Application of Optimal Control Theory-Techniques for Obtaining Some Estimates 16

4 One-Dimensional Controlled Processes 22

5 Optimal Stopping of a One-Dimensional Controlled Process 35

Notes 42

2 Auxiliary Propositions 45

1 Notation and Definitions 45

2 Estimates of the Distribution of a Stochastic Integral in a Bounded Region 51

3 Estimates of the Distribution of a Stochastic Integral in the Whole Space 61

4 Limit Behavior of Some Functions 67

5 Solutions of Stochastic Integral Equations and Estimates of the Moments 77

6 Existence of a Solution of a Stochastic Equation with Measurable Coefficients 86

7 Some Properties of a Random Process Depending on a Parameter 91

8 The Dependence of Solutions of a Stochastic Equation on a Parameter 102

9 The Markov Property of Solutions of Stochastic Equations 110

10 Itô's Formula with Generalized Derivatives 121

Notes 128

3 General Properties of a Payoff Function 129

1 Basic Results 129

2 Some Preliminary Considerations 140

3 The Proof of Theorems 1.5-1.7 147

4 The Proof of Theorems 1.8-1.11 for the Optimal Stopping Problem 152

Notes 161

4 The Bellman Equation 163

1 Estimation of First Derivatives of Payoff Functions 165

2 Estimation from Below of Second Derivatives of a Payoff Function 173

3 Estimation from Above of Second Derivatives of a Payoff Function 181

4 Estimation of a Derivative of a Payoff Function with Respect to t 188

5 Passage to the Limit in the Bellman Equation 193

6 The Approximation of Degenerate Controlled Processes by Nondegenerate Ones 200

7 The Bellman Equation 203

Notes 211

5 The Construction of ε-Optimal Strategies 213

1 ε-Optimal Markov Strategies and the Bellman Equation 213

2 ε-Optimal Markov Strategies: The Bellman Equation in the Presence of Degeneracy 218

3 The Payoff Function and Solution of the Bellman Equation: The Uniqueness of the Solution of the Bellman Equation 228

Notes 243

6 Controlled Processes with Unbounded Coefficients: The Normed Bellman Equation 245

1 Generalizations of the Results Obtained in Section 3.1 245

2 General Methods for Estimating Derivatives of Payoff Functions 254

3 The Normed Bellman Equation 266

4 The Optimal Stopping of a Controlled Process on an Infinite Interval of Time 275

5 Control on an Infinite Interval of Time 285

Notes 291

Appendices

1 Some Properties of Stochastic Integrals 293

2 Some Properties of Submartingales 299

Bibliography 303

Index 307
