Stationary Stochastic Processes. (MN-8):

by Takeyuki Hida

Paperback

$30.95

Overview


Encompassing both introductory and more advanced research material, these notes deal with the author's contributions to stochastic processes, focusing on the Brownian motion process and its derivative, white noise.

Originally published in 1970.

The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback and hardcover editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.

Product Details

ISBN-13: 9780691621418
Publisher: Princeton University Press
Publication date: 03/08/2015
Series: Princeton Legacy Library Series
Pages: 174
Product dimensions: 6.90(w) x 10.00(h) x 0.90(d)

Read an Excerpt

Stationary Stochastic Processes


By Takeyuki Hida

Princeton University Press and University of Tokyo Press

Copyright © 1970 Princeton University Press
All rights reserved.
ISBN: 978-0-691-08074-1



CHAPTER 1

STATIONARY STOCHASTIC PROCESSES
WHITE NOISE


§.0. Introduction

The ideas presented in this course were inspired by certain investigations of stationary stochastic processes using nonlinear operators acting on them, e.g., nonlinear prediction theory and the discussion of innovations for a stationary process.

We shall be interested in functionals of certain basic or fundamental stationary stochastic processes. In the discrete parameter case, they are the familiar sequences of independent identically distributed random variables. In the continuous parameter case, which concerns us, they are stationary processes with independent values at every moment in the sense of Gelfand. A particularly important role will be played by the so-called (Gaussian) white noise and also a generalized white noise. Before studying such processes, we will need to treat some preliminary matters. For example, white noise is not a stochastic process in the usual sense. Therefore we shall be led to give a new definition for a stationary stochastic process.

Once our general set-up has been established, we proceed to the analysis on Hilbert space arising from a stationary process with independent values at every moment. A detailed discussion can be given only in the case of white noise, when most of the results come from the classical theory for Wiener space and from the theory of Brownian motion. The greater part of the classical theory of stationary processes can be handled in our scheme.

We are in a position to analyze a given stationary process X(t). We are motivated by the following formal expression due to P. Lévy:

$$X(t) = \Phi(\xi_s,\ s \le t;\ t),$$

where $\Phi$ is a certain functional and $\{\xi_t\}$ forms a system of independent random variables. This expression is hard to interpret rigorously, but on an intuitive level it is full of meaning. For example, $\{\xi_t\}$ should be regarded as a process with independent values at every moment; it plays the role of the marginal random variables which measure the information at each moment. Following the lines suggested by the above expression, we shall consider not only functionals of $\xi_t$ but also the flows by which $\xi_t$ is transformed into other processes of the same type. In particular, we shall study flows which serve to characterize the type of the measure induced by the given stationary process with independent values at every moment. In this connection we shall find that our study is closely related to the representation theory of Lie groups.
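As a loose, discrete-parameter illustration of this expression, one may take $\Phi$ to be an autoregressive functional of past innovations; the Python sketch below (the AR(1) form and its coefficient are illustrative choices, not Lévy's formula itself) builds each $X_n$ from the innovations $\xi_s$, $s \le n$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative discrete analogue of X(t) = Phi(xi_s, s <= t; t):
# here Phi is an AR(1) functional, so X_n depends only on the
# innovations xi_1, ..., xi_n (the "information at each moment").
n, phi = 1000, 0.8
xi = rng.standard_normal(n)        # independent innovations
X = np.zeros(n)
for t in range(1, n):
    X[t] = phi * X[t - 1] + xi[t]
```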

Finally we shall discuss some applications to physics and engineering where infinite dimensional analysis is required.


Part I

§.1. Background

In this article we shall prepare some basic concepts as background for our discussions.


1.1 Probability space

Let Ω be a certain non-empty set. Each element ω of Ω is supposed to be an elementary event or a probability parameter. Let $\mathcal{B}$ be a σ-field of subsets of Ω, i.e., $\mathcal{B}$ satisfies

i) If $A_n \in \mathcal{B}$ for $n = 1, 2, \ldots$, then $\bigcup_{n=1}^{\infty} A_n \in \mathcal{B}$;

ii) if $A \in \mathcal{B}$, then $A^c \equiv \Omega - A \in \mathcal{B}$;

iii) $\emptyset \in \mathcal{B}$ ($\emptyset$ the empty set).


A set belonging to $\mathcal{B}$ is called an event or a measurable set (w.r.t. $\mathcal{B}$). A (countably additive) measure P defined on $\mathcal{B}$ is called a probability measure if P(Ω) = 1. The number P(A) is called the probability of the event A.

A set Ω with elements ω, together with a σ-field $\mathcal{B}$ and a probability measure P on $\mathcal{B}$, constitutes a probability space. We denote it by $\Omega(\mathcal{B}, P)$ or $(\Omega, \mathcal{B}, P)$ or simply by Ω.

If $A_n$, $n = 1, 2, \ldots$, belong to $\mathcal{B}$, the following sets again belong to $\mathcal{B}$:

$$\bigcup_{n=1}^{\infty} A_n, \qquad \bigcap_{n=1}^{\infty} A_n,$$

the inferior limit $\liminf_n A_n = \bigcup_{n=1}^{\infty} \bigcap_{m \ge n} A_m$,

the superior limit $\limsup_n A_n = \bigcap_{n=1}^{\infty} \bigcup_{m \ge n} A_m$.


(We sometimes refer to $\limsup_n A_n$ as the event that the $A_n$'s occur infinitely often.)


1.2. Random variable and probability distribution.

A real-valued function X(ω) on Ω is called a random variable (r.v.) on the probability space $(\Omega, \mathcal{B}, P)$ if X is measurable w.r.t. the σ-field $\mathcal{B}$. Given a r.v. X, a probability measure $P_X$ is naturally defined on the measurable space $(R^1, \mathcal{B}(R^1))$ in the following manner:

$$P_X(E) = P(\{\omega : X(\omega) \in E\}), \qquad E \in \mathcal{B}(R^1),$$

where $R^1 = (-\infty, \infty)$ and $\mathcal{B}(R^1)$ is the usual Borel field. Thus defined, $P_X$ is called the probability distribution of the r.v. X.
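The distribution $P_X$ can be approximated empirically: the frequency with which samples of X fall in a Borel set E estimates $P_X(E)$. A minimal numerical sketch (the distribution and the set E below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(10)
X = rng.standard_normal(1_000_000)   # samples X(omega) over many omega

# P_X(E) for E = (a, b] is estimated by the fraction of samples in E.
a, b = 0.0, 1.0
print(((X > a) & (X <= b)).mean())   # ~ Phi(1) - Phi(0) = 0.3413
```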

In a similar manner we can define a complex-valued or vector-valued r.v. and its probability distribution.


Remarks. We will also give more general definitions of a r.v. and its probability distribution, which play important roles in later articles.

If X is a real r.v., the expectation or mean of X is the integral

$$E(X) = \int_{\Omega} X(\omega)\, dP(\omega),$$

provided X is integrable. The n-th moment and the n-th absolute moment of X are the expectations of $X^n$ and of $|X|^n$ respectively. $E[(X - E(X))^2]$ is called the variance of the r.v. X.
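These quantities are straightforward to estimate by simulation, replacing the integral over Ω by a sample average; in the sketch below the distribution and sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.exponential(scale=2.0, size=100_000)   # an integrable r.v., E(X) = 2

mean = X.mean()                          # estimates E(X)
second_moment = (X ** 2).mean()          # estimates E(X^2)
abs_moment_3 = (np.abs(X) ** 3).mean()   # 3rd absolute moment E(|X|^3)
variance = ((X - mean) ** 2).mean()      # estimates E[(X - E(X))^2] = 4
print(mean, second_moment, abs_moment_3, variance)
```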


1.3. Sequence of Events and r.v.'s.

We are interested in sequences of events and of r.v.'s which are independent.

i) Events $A_t$ (where t runs over a not necessarily finite set T) are said to be independent if, for any finite subset $(t_1, \ldots, t_n)$ of T,

$$P\Big(\bigcap_{i=1}^{n} A_{t_i}\Big) = \prod_{i=1}^{n} P(A_{t_i}).$$

ii) σ-fields $\mathcal{C}_t$ (where $t \in T$ and $\mathcal{C}_t \subset \mathcal{B}$) are said to be independent if the events $A_t$ are independent for any choice of $A_t \in \mathcal{C}_t$.

If the σ-fields $\mathcal{C}_t$ are independent, then any sub-σ-fields $\mathcal{C}'_t$ with $\mathcal{C}'_t \subset \mathcal{C}_t$ are also independent.

iii) r.v.'s $X_t$ on the same probability space are said to be independent if the $\mathcal{B}(X_t)$ are independent, where $\mathcal{B}(X_t)$ is the smallest σ-field with respect to which $X_t$ is measurable.

If r.v.'s $X_t$ are independent, we can easily prove the following equations:

(1) $P(X_{t_1} \in E_1, \ldots, X_{t_n} \in E_n) = \prod_{i=1}^{n} P(X_{t_i} \in E_i)$

for any finite subset $(t_1, \ldots, t_n)$ and for any $E_1, \ldots, E_n$ in $\mathcal{B}(R^1)$;

(2) $E\big(\prod_{i=1}^{n} X_{t_i}\big) = \prod_{i=1}^{n} E(X_{t_i})$

for any finite subset $(t_1, \ldots, t_n)$, provided each $X_{t_i}$ is integrable.

For a system of independent r.v.'s with finite variances we can give a geometric representation, since each r.v. can be regarded as an element of the Hilbert space $L^2(\Omega, P)$. The mean and the second order absolute moment of a r.v. X correspond to the inner product $(X, 1)$ and the square of the norm $\|X\|^2$ in $L^2(\Omega, P)$ respectively. Independent r.v.'s with zero mean are orthogonal to each other in $L^2(\Omega, P)$. If a sequence $X_n$ converges to X in $L^2(\Omega, P)$, we say that $X_n$ converges to X in mean square or in m.q. (moyenne quadratique).
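The orthogonality of independent zero-mean r.v.'s can be seen numerically by treating a large Monte Carlo sample as a discretization of $(\Omega, P)$; in the sketch below (distributions chosen arbitrarily) the empirical inner product $E(XY)$ comes out near 0:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1_000_000                     # Monte Carlo "points" omega

X = rng.standard_normal(N)        # zero mean
Y = rng.uniform(-1.0, 1.0, N)     # independent of X, zero mean

print((X * Y).mean())   # inner product (X, Y) = E(XY): ~0 (orthogonal)
print((X * X).mean())   # squared norm ||X||^2 = E(X^2): ~1
```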

Example. Let $\{X_n,\ n = 1, 2, \ldots\}$ be a Gaussian system, i.e., the joint distribution of any finite collection of the $X_n$'s is Gaussian. Assume that $E(X_n) = 0$ for every n. Since $X_n$ has finite variance, $X_n \in L^2(\Omega, P)$. Applying the Schmidt orthogonalization we obtain the following representation:

(3) $X_n = \sum_{k=1}^{n} a_{nk}\, \xi_k,$

where $\{\xi_k\}$ is an orthonormal system in $L^2(\Omega, P)$. It is easy to see that $\{\xi_k\}$ forms a Gaussian system. We note that orthogonality is equivalent to independence for Gaussian r.v.'s. Therefore $\{\xi_k\}$ must be a system of independent Gaussian random variables with the common distribution N(0, 1) (Gaussian distribution with mean 0 and variance 1).

Having obtained the representation (3), we can easily discuss prediction theory and related questions for $\{X_n\}$. This suggests the investigation of P. Lévy's representation of a Gaussian process.
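A numerical sketch of the representation (3): starting from a correlated zero-mean Gaussian system, the Schmidt procedure under the empirical inner product $E(XY)$ produces an orthonormal, hence independent N(0, 1), system $\{\xi_k\}$. The 3×3 mixing matrix and sample size below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000                          # Monte Carlo resolution of L^2(Omega, P)

# A correlated zero-mean Gaussian system X_1, X_2, X_3 (arbitrary mixing).
A = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, 0.3, 1.0]])
X = A @ rng.standard_normal((3, N))  # each row samples one X_n

# Schmidt orthogonalization w.r.t. the empirical inner product E(XY).
xi = []
for x in X:
    for e in xi:
        x = x - (x * e).mean() * e            # remove projection onto e
    xi.append(x / np.sqrt((x * x).mean()))    # normalize to ||xi_k|| = 1
xi = np.array(xi)

print(np.round(xi @ xi.T / N, 2))    # empirical Gram matrix: ~ identity
```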

Next we consider a countable system of events {An}.


Theorem 1.1. (Borel-Cantelli)

I. If $\sum_n P(A_n) < \infty$, then $P(\limsup_n A_n) = 0$.

II. If the $A_n$ are independent, then $P(\limsup_n A_n) = 0$ or 1 according as $\sum_n P(A_n) < \infty$ or $= +\infty$.

Proof. I is easily proved.

For the proof of II let us assume that $\sum_n P(A_n) = +\infty$. For $m > n$, independence gives

$$P\Big(\bigcap_{k=n}^{m} A_k^c\Big) = \prod_{k=n}^{m} \big(1 - P(A_k)\big) \le \exp\Big(-\sum_{k=n}^{m} P(A_k)\Big).$$

Our assumption implies that the product in the above expression approaches 0 as $m \to \infty$. Therefore

$$P\Big(\bigcap_{k=n}^{\infty} A_k^c\Big) = 0, \quad \text{i.e.,} \quad P\Big(\bigcup_{k=n}^{\infty} A_k\Big) = 1 \ \text{for every } n.$$

The rest of the proof is the same as for I.

If the $A_n$ are independent and if, in particular, $A = \lim_n A_n$ exists, then P(A) must be 0 or 1.

As an application of I we have the following proposition: if there exist sequences $\varepsilon_n \ge 0$ and $\eta_n \ge 0$ such that $\sum_n \varepsilon_n < \infty$ and $\sum_n \eta_n < \infty$, and if $P(|X_{n+1} - X_n| > \varepsilon_n) \le \eta_n$ holds for every n, then $\lim_n X_n(\omega)$ exists for almost all ω.
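Both halves of the theorem are easy to see in simulation; the sketch below (the event probabilities $1/n^2$ and $1/n$ are illustrative choices) contrasts a summable case, where late events essentially stop occurring, with a divergent case, where they keep occurring:

```python
import numpy as np

rng = np.random.default_rng(4)
trials, n_max = 200, 5000
ns = np.arange(1, n_max + 1)

for name, p in [("1/n^2 (summable)", 1.0 / ns**2),
                ("1/n  (divergent)", 1.0 / ns)]:
    occurs = rng.random((trials, n_max)) < p   # independent events A_n
    late = occurs[:, 1000:].sum(axis=1)        # hits among A_1001..A_5000
    print(name, "mean late occurrences:", late.mean())
# Summable case: ~0 late occurrences (only finitely many A_n occur, a.s.);
# divergent case: occurrences never die out (limsup A_n has probability 1).
```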

For a sequence of r.v.'s $X_n$ on $(\Omega, \mathcal{B}, P)$ we introduce the σ-field $\mathcal{B}_n = \mathcal{B}(X_n, X_{n+1}, \ldots)$, the smallest σ-field with respect to which all of the $X_m$ with $m \ge n$ are measurable. Since the $\mathcal{B}_n$ decrease as n increases, the limit

$$\mathcal{C} = \bigcap_{n=1}^{\infty} \mathcal{B}_n$$

exists and is again a σ-field. $\mathcal{C}$ is called the tail σ-field of the sequence $X_n$. A function is said to be a tail function if it is measurable with respect to $\mathcal{C}$.

Examples of tail functions

a) $\limsup_{n \to \infty} X_n$,

b) $\limsup_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} X_k$.


Theorem 1.2. (Zero-One Law)

Let $X_n$, n = 1, 2, ..., be a sequence of independent r.v.'s. Then the tail σ-field is equivalent to the trivial field $\{\emptyset, \Omega\}$, i.e., each member of the tail σ-field has measure 0 or 1.

Proof. The tail σ-field $\mathcal{C}$, being a sub-σ-field of $\mathcal{B}(X_{n+1}, X_{n+2}, \ldots)$, is independent of $\mathcal{B}(X_1, \ldots, X_n)$. Since this is true for every n, $\mathcal{C}$ is also independent of $\mathcal{B}(X_1, X_2, \ldots)$, which includes $\mathcal{C}$. Hence $\mathcal{C}$ is independent of itself. Thus, for any event A in $\mathcal{C}$,

$$P(A) = P(A \cap A) = P(A) \cdot P(A),$$

and therefore P(A) = 0 or 1.

Corollary. Let $X_n$ be a sequence of independent r.v.'s. Then $X_n$ converges a.s. or diverges a.s., and similarly for the sum $\sum_n X_n$.

Two sequences $X_n$ and $Y_n$ defined on the same probability space are called tail equivalent if they differ a.s. in only a finite number of terms. If $\sum_n P(X_n \ne Y_n) < \infty$, then the sequences are tail equivalent (Borel-Cantelli theorem I). In this case $\sum_n X_n$ and $\sum_n Y_n$ are convergence equivalent, i.e., the sets on which they converge differ by a set of measure 0. Thus, so far as questions of convergence are concerned, these series can be used interchangeably.


1.4. Law of large numbers.

We begin with two estimates of probabilities in terms of moments.

i) Tchebichev inequality. Let g(x) be an even, non-negative function which is positive and non-decreasing on $(0, \infty)$. Then for any positive $\varepsilon$ we have

$$P(|X| \ge \varepsilon) \le \frac{E(g(X))}{g(\varepsilon)}.$$

ii) Kolmogorov inequality. Let $X_n$ be a sequence of independent r.v.'s with finite means. Set $S_n = \sum_{k=1}^{n} X_k$. For any positive $\varepsilon$ we have

$$P\Big(\max_{1 \le k \le n} |S_k - E(S_k)| \ge \varepsilon\Big) \le \frac{V(S_n)}{\varepsilon^2},$$

where V denotes the variance.

The Tchebichev inequality follows from the elementary computation

$$E(g(X)) \ge \int_{\{|X| \ge \varepsilon\}} g(X)\, dP \ge g(\varepsilon)\, P(|X| \ge \varepsilon).$$
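Checking the inequality numerically is immediate; with $g(x) = x^2$ (one admissible choice) and standard Gaussian samples, the estimated $P(|X| \ge \varepsilon)$ stays below $E(g(X))/g(\varepsilon)$:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal(1_000_000)
g = lambda x: x ** 2    # even, non-negative, increasing on (0, infinity)

for eps in (0.5, 1.0, 2.0):
    lhs = (np.abs(X) >= eps).mean()   # P(|X| >= eps), estimated
    rhs = g(X).mean() / g(eps)        # E(g(X)) / g(eps)
    print(f"eps={eps}: {lhs:.4f} <= {rhs:.4f}")
```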

The Kolmogorov inequality is more subtle. To prove it, we assume that $E(X_k) = 0$ for all k and we set

$$A_k = \Big\{\omega : \max_{1 \le j \le k} |S_j(\omega)| < \varepsilon\Big\}, \qquad A_0 = \Omega.$$

Then we have

$$B_k = A_{k-1} - A_k, \quad \text{and} \quad A_n^c = \sum_{k=1}^{n} B_k \ \text{(direct sum)}.$$

Consider the integral

$$\int_{B_k} S_n^2\, dP = \int_{B_k} \big(S_k + (S_n - S_k)\big)^2\, dP \ge \int_{B_k} S_k^2\, dP + 2 \int_{B_k} S_k (S_n - S_k)\, dP = \int_{B_k} S_k^2\, dP \ge \varepsilon^2 P(B_k).$$

(The crucial third step uses the independence of $\chi_{B_k} S_k$ and $S_n - S_k$: since $E(S_n - S_k) = 0$, the cross term vanishes; the last step uses $|S_k| \ge \varepsilon$ on $B_k$.)

Summing over $k = 1, 2, \ldots, n$, we have

$$\varepsilon^2 P(A_n^c) = \varepsilon^2 \sum_{k=1}^{n} P(B_k) \le \sum_{k=1}^{n} \int_{B_k} S_n^2\, dP \le \int_{\Omega} S_n^2\, dP = V(S_n).$$
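The maximal inequality can likewise be checked by simulation; in the sketch below (standard Gaussian summands, n = 50, and $\varepsilon$ = 10 are arbitrary choices) the frequency of a large maximal partial sum stays below $V(S_n)/\varepsilon^2$:

```python
import numpy as np

rng = np.random.default_rng(6)
trials, n, eps = 100_000, 50, 10.0

X = rng.standard_normal((trials, n))   # independent, E(X_k) = 0, V(X_k) = 1
S = X.cumsum(axis=1)                   # partial sums S_1, ..., S_n per trial

lhs = (np.abs(S).max(axis=1) >= eps).mean()   # P(max_k |S_k| >= eps)
rhs = n / eps**2                              # V(S_n) / eps^2
print(lhs, "<=", rhs)                         # e.g. ~0.3 <= 0.5
```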

Suppose that the $X_n$ are independent and identically distributed with finite variance. Then the Tchebichev inequality gives

$$P\Big(\Big|\frac{S_n}{n} - m\Big| \ge \varepsilon\Big) \le \frac{V(X_1)}{n \varepsilon^2} \longrightarrow 0 \qquad (n \to \infty),$$

where m is the mean of the $X_n$. The Kolmogorov inequality yields the following stronger result:


Theorem 1.3. If the r.v.'s $X_n$ are independent and have finite variances, then $\sum_n V(X_n)/b_n^2 < \infty$ with increasing positive $b_n \uparrow \infty$ implies that

$$\frac{1}{b_n} \sum_{k=1}^{n} \big(X_k - E(X_k)\big) \to 0 \quad \text{a.s.}$$

In particular we have the following classical form of the strong law of large numbers.

Corollary. If the independent r.v.'s $X_n$ are identically distributed with finite mean m and finite variance, then

$$\frac{S_n}{n} \to m \quad \text{a.s.}$$

The corollary is still true if we drop the assumption of finite variance, but an additional argument is required.
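The convergence in the corollary is visible along individual sample paths; the sketch below (exponential samples with mean 0.5, an arbitrary choice) tracks $S_n/n$ for a few paths:

```python
import numpy as np

rng = np.random.default_rng(7)
m, paths, n_max = 0.5, 5, 1_000_000
X = rng.exponential(scale=m, size=(paths, n_max))   # i.i.d. with mean m

n = np.arange(1, n_max + 1)
running_mean = X.cumsum(axis=1) / n   # S_n / n along each sample path
print(running_mean[:, -1])            # each entry ~0.5: S_n / n -> m a.s.
```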


§.2. Brownian motion.

First we shall define a standard Brownian motion and give one possible construction for it. Then we shall see that Brownian motion determines a measure on function space.


2.1. Definition of Brownian motion

A system of r.v.'s $\{B(t, \omega);\ \omega \in \Omega,\ 0 \le t < \infty\}$ is called a standard Brownian motion if it satisfies the following conditions:

i) The system $\{B(t);\ 0 \le t < \infty\}$ is Gaussian.

ii) B(0) = 0, a.s.

iii) The probability distribution of $B(t) - B(s)$ is $N(0, |t - s|)$.

From iii) it follows that $E(B(t)) = 0$ for every t.

Also, for $u < s < t$, the relation

$$E\big[(B(t) - B(s))(B(s) - B(u))\big] = \tfrac{1}{2}\big\{E[(B(t) - B(u))^2] - E[(B(t) - B(s))^2] - E[(B(s) - B(u))^2]\big\} = \tfrac{1}{2}\big[(t - u) - (t - s) - (s - u)\big] = 0,$$

together with condition iii), shows that $B(t) - B(s)$ and $B(s) - B(u)$ are independent. Thus $E(B(t) B(s)) = \min(t, s)$. Similarly, it can be shown that $\{B(t_{i+1}) - B(t_i);\ t_1 < t_2 < \cdots < t_n\}$ is a system of independent Gaussian r.v.'s, and therefore

(1) $B(t) - B(s)$ with $s < t$ is independent of the system $\{B(u);\ u \le s\}$. In other words, B(t) is an additive stochastic process.
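These moment relations are easy to verify on simulated paths; the sketch below (grid and sample sizes are arbitrary) builds paths from independent N(0, dt) increments and estimates $E(B(t)B(s)) = \min(t, s)$ and the orthogonality of an increment to the past:

```python
import numpy as np

rng = np.random.default_rng(8)
trials, steps = 100_000, 100
dt = 1.0 / steps

# Paths of B on [0, 1] from independent N(0, dt) increments.
dB = np.sqrt(dt) * rng.standard_normal((trials, steps))
B = np.concatenate([np.zeros((trials, 1)), dB.cumsum(axis=1)], axis=1)

s, t = 30, 70                                 # s = 0.3, t = 0.7 on the grid
print((B[:, s] * B[:, t]).mean())             # ~ min(0.3, 0.7) = 0.3
print(((B[:, t] - B[:, s]) * B[:, s]).mean()) # ~ 0: increment vs. past
```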


Remarks. Property (1) suggests that $\frac{d}{dt} B(t)$, if it exists in a suitable sense, is a system of independent r.v.'s. We will discuss this in §.5.


The conditions i), ii), iii) determine the probability distribution of $(B(t_1), \ldots, B(t_n))$ for any finite subset $(t_1, \ldots, t_n)$ of $[0, \infty)$. The system of these distributions satisfies the necessary consistency conditions for the Kolmogorov extension theorem; this assures the existence of a standard Brownian motion with $\Omega = R^{[0, \infty)}$. However, we prefer to give a direct construction in order to clarify certain important properties.
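One standard direct construction (whether it is the one the notes go on to give is not shown in this excerpt) is the Lévy–Ciesielski expansion of Brownian motion over Schauder functions; a minimal sketch:

```python
import numpy as np

def schauder_bm(n_levels, t, rng):
    """Levy-Ciesielski partial sum: Brownian motion on [0, 1] as a random
    series of Schauder (integrated Haar) functions with N(0,1) coefficients."""
    B = t * rng.standard_normal()          # level -1 term: xi * t
    for j in range(n_levels):
        for k in range(2 ** j):
            center = (2 * k + 1) / 2 ** (j + 1)
            # Tent of height 2^(-j/2 - 1) supported on [k/2^j, (k+1)/2^j].
            tent = np.maximum(0.0, 2.0 ** (-j / 2 - 1)
                              - 2.0 ** (j / 2) * np.abs(t - center))
            B = B + rng.standard_normal() * tent
    return B

t = np.linspace(0.0, 1.0, 1025)
path = schauder_bm(10, t, np.random.default_rng(9))   # one approximate path
```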


(Continues...)

Excerpted from Stationary Stochastic Processes by Takeyuki Hida. Copyright © 1970 Princeton University Press. Excerpted by permission of Princeton University Press and University of Tokyo Press.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

  • Frontmatter, pg. i
  • Preface, pg. iii
  • Contents, pg. iv
  • 0. Introduction, pg. 1
  • 1. Background, pg. 3
  • 2. Brownian motion, pg. 14
  • 3. Additive processes, pg. 31
  • 4. Stationary processes, pg. 62
  • 5. Gaussian processes, pg. 86
  • 6. Hilbert space (L2) arising from white noise, pg. 94
  • 7. Flow of the Brownian motion, pg. 106
  • 8. Infinite dimensional rotation group, pg. 113
  • 9. Fourier Analysis on (L2), motion group and Laplacian, pg. 120
  • 10. Applications, pg. 135
  • 11. Generalized White Noise, pg. 147
  • Appendix, pg. 163


