Auxiliary Signal Design for Failure Detection

ISBN-10:
0691099871
ISBN-13:
9780691099873
Pub. Date:
02/15/2004
Publisher:
Princeton University Press

Overview

Many industries, such as transportation and manufacturing, use control systems to ensure that parameters such as temperature or altitude behave in a desirable way over time. For example, pilots need assurance that the plane they are flying will maintain a particular heading. An integral part of control systems is a mechanism for failure detection to ensure safety and reliability.


This book offers an alternative failure detection approach that addresses two of the fundamental problems in the safe and efficient operation of modern control systems: failure detection—deciding when a failure has occurred—and model identification—deciding which kind of failure has occurred. Much of the work in both categories has been based on statistical methods and carried out under the assumption that the system is monitored passively.


Campbell and Nikoukhah's book proposes an "active" multimodel approach. It calls for applying an auxiliary signal that will affect the output so that it can be used to easily determine if there has been a failure and what type of failure it is. This auxiliary signal must be kept small, and often brief in duration, in order not to interfere with system performance and to ensure timely detection of the failure. The approach is robust and uses tools from robust control theory. Unlike some approaches, it is applicable to complex systems. The authors present the theory in a rigorous and intuitive manner and provide practical algorithms for implementation of the procedures.


Product Details

ISBN-13: 9780691099873
Publisher: Princeton University Press
Publication date: 02/15/2004
Series: Princeton Series in Applied Mathematics, #11
Pages: 224
Product dimensions: 6.00(w) x 9.25(h) x (d)

About the Author

Stephen L. Campbell is Professor of Mathematics at North Carolina State University. Ramine Nikoukhah is Senior Scientist (Directeur de Recherche) at Institut National de Recherche en Informatique et en Automatique (INRIA) in France.

Read an Excerpt

Auxiliary Signal Design for Failure Detection


Chapter One

Introduction

1.1 THE BASIC QUESTION

In this book, we study the problem of active failure detection in dynamical systems. Failure detection in some form is now part of essentially every complex device or process. In some applications the detection of failures, such as water losses in nuclear reactors or engine problems on an aircraft, is important for safety purposes. The detection of failure leads to emergency actions by operators or computers and results in a quick and controlled shutdown of the system. In other situations, such as on space missions, the detection of failures results in the use of back-up or alternative systems. These are the more dramatic examples and are often what one first thinks of when hearing of a failure. But in today's society failure detection also plays a fundamental role in managing costs, promoting efficiency, and protecting the environment. It is often much more economical to repair a part during a scheduled maintenance than to have a breakdown in the field. For example, failure may mean that a part or subsystem is not performing to specification, resulting in increased fuel consumption. Detecting this failure means that a scheduled repair can be made with savings of both resources and money. Failure can also mean that a part or subsystem is not performing as expected and that if allowed to continue the result could be a catastrophic failure. But again detection of this type of failure means that repairs can be initiated in an economical and convenient manner. It is much easier to repair a weakened pump than to have to clean up a major sewage spill.

A number of specific examples from applications are in the cited literature. In chapters 2 and 3 we shall use a couple of intuitive examples to motivate some of the ideas that follow. Simpler academic examples will be used to illustrate most of the key ideas and algorithms. Then in the later chapters we shall include some more detailed examples from application areas.

Because of the fundamental role that failure detection plays, it has been the subject of many studies in the past. There have been numerous books [3, 69, 4, 26, 39, 70, 27] and survey articles [83, 44, 1, 38, 34, 68, 35, 36, 37] dedicated to failure detection. The book by Chen and Patton [26] in particular gives an up-to-date overview of the subject.

Most of these works are concerned with the problem of passive failure detection. In the passive approach, for material or security reasons, the detector has no way of acting upon the system. Rather, the detector can only monitor the inputs and the outputs of the system and then try to decide whether a failure has occurred and, if possible, of what kind. This decision is made by comparing the measured input-output behavior of the system with the "normal" behavior of the system. The passive approach is often used to continuously monitor the system, although it can also be used to make periodic checks. One simple example of a passive failure detection system is the one that monitors the temperature of your car engine. If the engine gets too hot, a warning light may come on. The detector does nothing but passively estimate the engine temperature and compare it to the maximum allowable temperature.

A major drawback with the passive approach is that failures can be masked by the operation of the system. This is true, in particular, for controlled systems. The reason for this is that the purpose of controllers, in general, is to keep the system at some equilibrium point even if the behavior of the system changes. This robustness property, which is clearly desired in control systems, tends to mask abnormal behaviors of the system. This makes the task of failure detection difficult, particularly if it is desired to detect failures that degrade performance. By the time the controller can no longer compensate for the failure, the situation may have become more severe, with much more serious consequences. An example of this effect is the well-known fact that it is harder for a driver to detect an underinflated or flat front tire in a car that is equipped with power steering. This trade-off between detection performance and controller robustness has been noted in the literature and has led to the study of the integrated design of controller and detector. See, for example, [60, 80]. A more dramatic example occurred in 1987 when a pilot flying an F-117 Nighthawk, which is the twin-tailed aircraft known as the stealth fighter, encountered bad weather during a training mission. He lost one of his tail assemblies but proceeded back and landed his plane without ever knowing that he was missing part of the tail. The robustness of the control system in this case had the beneficial effect of enabling the pilot to return safely. However, it also had the effect that the pilot did not realize that his aircraft had reduced capability and that the plane would not have performed correctly had a high-speed maneuver been required.

But the problem of masking of failures by the system operation is not limited to controlled systems. Some failures may simply remain hidden under certain operating conditions and show up only under special circumstances. For example, a failure in the brake system of a truck is very difficult to detect as long as the truck is cruising along the road on level ground. It is for this reason that on many roads, just before steep downhill stretches, there are signs asking truck drivers to test their brakes. A driver who disregarded these signs would find out about a brake failure only when he needed to brake going downhill, that is, too late to avoid running off the road or having an accident.

An alternative to passive detection, which could avoid the problem of failures being masked by system operation, is active detection. The active approach to failure detection consists of acting upon the system on a periodic basis, or at critical times, using a test signal in order to detect abnormal behaviors that would otherwise remain undetected during normal operation.

The detector in an active approach can act either by taking over the usual inputs of the system or through a special input channel. An example of using the existing input channels is testing the brakes by stepping on the brake pedal. One class of applications using special channels is when the system involves a collection of pipes or tubes and a fluid or gas is being pumped through the pipes. A substance is injected into the flow in order to determine flow characteristics and pipe geometry. A specific example is the administration of dyes using intravenous injection when conducting certain medical imaging studies. The imaging study lasts for a certain period of time. Since many people react to the dyes, it is desired to keep both the total amount of dye and the rate at which the dye is injected small, consistent with getting sufficient additional resolution.

The active detection problem has been less studied than the passive detection problem. The idea of injecting a signal into the system for identification purposes, that is, to determine the values of various physical parameters, has been widely used and is a fundamental part of engineering design. But the use of extra input signals specifically in the context of failure detection was introduced by Zhang [90] and later developed by Kerestecioglu and Zarrop [49, 48, 50]. These works served as part of the initial motivation for our study. However, these authors consider the problem in a very different context, which in turn leads to mathematical problems that are very different from those we consider in this book. Accordingly, we have chosen not to review them here.

There are major efforts under way in the aerospace and industrial areas to try to get more extended and more autonomous operation of everything from space vehicles to ships at sea. Regular and extensive maintenance is being replaced by less frequently scheduled maintenance and smaller crews. This is to be accomplished by large numbers of sensors and increased software to enable the use of "condition-based maintenance." Active failure detection will play an increasingly important role both in the primary system and in back-up systems to be used in the case of sensor failures.

Before beginning the careful development in chapter 2, we will elaborate a little more on the ideas we have just introduced in this section.

1.2 FAILURE DETECTION

Failure detection consists of deciding whether a system is functioning properly or has failed. This decision process is based on measurements of some of the inputs and outputs of the system. In most cases, these measurements are obtained from sensors placed on or around the system and from knowledge of some of the control inputs.

Given the measurements, the problem is then to decide if the measurement data are consistent with "normal functioning" of the system.

There are two ways of approaching this problem. One is to define a set of input-output trajectories consistent with normal operation of the system. These trajectories are sometimes called the behavior of the system. Failure detection then becomes some type of set inclusion test. The other approach consists of assigning a probability to each trajectory and then using probabilistic arguments to build a test. But even in the first approach the notion of probability is often present because without it there is in general no way of defining a set of normal trajectories without being overly conservative. What we often do is to exclude "unlikely" trajectories from this set by selecting an a priori threshold on the likelihood of the trajectories that we admit into the set. Indeed, under the assumption that the observations result from the model, an abnormal behavior is nothing but an unlikely event. There are numerous variations on these two approaches. The choice of which to use is influenced by the nature of the problem.

In model-based failure detection, the normal (nonfaulty) behavior of the system is characterized using a mathematical model, which can be deterministic or stochastic. This model then defines an analytical redundancy between the inputs and the outputs of the system, which can be tested for failure detection purposes. The use of analytical redundancy in the field of failure detection originated with the works of Beard [5] and Jones [46], and of Mehra and Peschon [59] in the stochastic setting. A good picture of the early developments is given in the survey by Willsky [83].

This book develops a model-based approach for several classes of models which consist of differential, difference, and algebraic equations. Accurate models of real physical systems can become quite complex, involving a variety of mathematical objects including partial differential equations (PDEs). However, it is intrinsic to the problem of failure detection that the tests have to be carried out either in real time or close to it. The whole point of failure detection is to determine that a failure has occurred in time to carry out some type of remedial action. Thus, while some calculations, such as design of the detector, can be done off-line, the actual detection test must usually be carried out on-line. To accomplish this, the model used for failure detection purposes in most cases is linear. Nonlinear effects are often included in the noise and model uncertainty effects. In addition, either the models are finite dimensional, or in the case of differential equations, the dimension of the state space is finite. This often requires some type of approximation process if the true underlying models are infinite dimensional. We illustrate this in chapter 4 when we consider differential equation models that include delays.

In the simplest case of a model given by a dynamical system, we would have a deterministic system with a known initial condition. In this case the set of normal behaviors would be reduced to a single trajectory. The failure detection test in this case would be very simple, but this situation does not correspond to real-life cases encountered in practice.

A first step in building more realistic model-based normal behavior sets is to consider that the initial condition of the model, characterizing the behavior set, is unknown. To illustrate, suppose that for a continuous-time differential dynamical system, all the information we have is summarized in the system equations

$$\dot{x} = Ax + Bu, \tag{1.2.1a}$$
$$y = Cx + Du, \tag{1.2.1b}$$

where u and y are, respectively, the measured input and output of the system, x is the state, and A,B,C,D are considered known. In the corresponding discrete-time case, the system equations would be

$$x(k+1) = Ax(k) + Bu(k), \tag{1.2.2a}$$
$$y(k) = Cx(k) + Du(k). \tag{1.2.2b}$$

This way of introducing uncertainty in the model is reasonable because the initial condition of the model usually corresponds to the internal state of the system, which is not directly measured. This approach also leads to simple tests for the inclusion of observed data in the set of normal behaviors. For example, in the discrete-time case, the set of input-output pairs satisfying (1.2.2) can be characterized in terms of a set of linear dynamical equations involving only measured quantities u and y. To illustrate, suppose we denote the time shift operator by z. Then the system (1.2.2) can be expressed as follows (using the shift operator is equivalent to taking the z transform of the system, which is the discrete analogue of taking a Laplace transform of a continuous-time system):

$$\begin{bmatrix} zI - A \\ C \end{bmatrix} x = \begin{bmatrix} B & 0 \\ -D & I \end{bmatrix} \begin{bmatrix} u \\ y \end{bmatrix}. \tag{1.2.3}$$

Thus if H(z) is any polynomial matrix in z such that

$$H(z) \begin{bmatrix} zI - A \\ C \end{bmatrix} = 0 \tag{1.2.4}$$

and the matrix-valued polynomial G(z) is defined by

$$G(z) = H(z) \begin{bmatrix} B & 0 \\ -D & I \end{bmatrix}, \tag{1.2.5}$$

then the relation

$$G(z) \begin{bmatrix} u \\ y \end{bmatrix} = 0 \tag{1.2.6}$$

must hold. The analytical redundancy relations (1.2.6) are also called parity checks. They are easy to test at every time step, but they are not unique. In the actual implementation of this approach, the choice of the test is made so as to account for unmodeled uncertainties, and the result is tested against a threshold. See [53] for one such approach.
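As a numerical sketch of the parity-check idea, one can use the finite-window form of the same redundancy: stack the measurements over a window, take a left null vector of the stacked observability matrix, and check that the resulting residual vanishes on fault-free data. The system matrices and window length below are illustrative assumptions, not taken from the book.

```python
import numpy as np
from scipy.linalg import null_space

# Illustrative system matrices (assumptions, not from the book) for
# x(k+1) = A x(k) + B u(k),  y(k) = C x(k) + D u(k)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

n = A.shape[0]                 # state dimension
p, m = C.shape[0], B.shape[1]
s = n                          # window length; s >= n guarantees a parity vector exists

# Stacked relation over the window:
#   [y(k); ...; y(k+s)] = O x(k) + T [u(k); ...; u(k+s)]
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
T = np.zeros(((s + 1) * p, (s + 1) * m))
for i in range(s + 1):
    T[i * p:(i + 1) * p, i * m:(i + 1) * m] = D
    for j in range(i):
        T[i * p:(i + 1) * p, j * m:(j + 1) * m] = (
            C @ np.linalg.matrix_power(A, i - j - 1) @ B
        )

# Parity vectors: rows V with V O = 0 (left null space of O)
V = null_space(O.T).T

# On fault-free data the parity residual V (Y - T U) = V O x(k) vanishes
rng = np.random.default_rng(0)
x = np.array([0.3, -0.5])      # unknown initial state
ys, us = [], []
for k in range(s + 1):
    u = rng.standard_normal(m)
    ys.append(C @ x + D @ u)
    us.append(u)
    x = A @ x + B @ u
r = V @ (np.concatenate(ys) - T @ np.concatenate(us))   # parity residual
```

Note that the residual is zero regardless of the unknown initial state, which is exactly what makes such a check usable as a normal-behavior test.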

The other main method for testing the inclusion of observed data in the set of normal behaviors is to use an observer. Observers play a fundamental role in control theory. Given a dynamical system, an observer is a second dynamical system which takes the inputs and outputs of the first system as inputs and whose state (or output) asymptotically approaches the state (part of the state) of the first system. This convergence takes place independently of the initial conditions of the original system and the observer system. If the model is assumed to be perfectly known and only the initial condition is unknown, the observer residual, which is the difference between the measured output and its prediction based on past measurements, converges exponentially to zero. Thus this residual can be used for failure detection testing. Such tests are called observer based. In the continuous-time setting, for example, the observer-based residual generator for system (1.2.1) can be constructed as follows:

$$\dot{\hat{x}} = A\hat{x} + Bu - L(y - C\hat{x} - Du), \qquad \hat{x}(0) = 0,$$
$$r = y - C\hat{x} - Du,$$

where r denotes the residual and L is a matrix chosen such that A + LC is Hurwitz (all eigenvalues have a negative real part) to assure the convergence of r to zero.

In practice, the residual is tested against a threshold to account for uncertainties. The freedom in the choice of the observer is used for robustness purposes. One such method can be found in [28].
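A small numerical sketch may make the observer residual concrete. It uses the discrete-time analogue of (1.2.1); the matrices and the pole-placement choice of L are assumptions of this sketch, not taken from the book.

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative matrices (assumptions, not from the book)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Choose L so that A + LC has eigenvalues well inside the unit circle,
# the discrete-time counterpart of "A + LC is Hurwitz".
K = place_poles(A.T, C.T, [0.1, 0.2]).gain_matrix   # eig(A.T - C.T K) = {0.1, 0.2}
L = -K.T                                            # hence eig(A + L C) = {0.1, 0.2}

rng = np.random.default_rng(1)
x = np.array([1.0, -2.0])   # true initial state, unknown to the observer
xh = np.zeros(2)            # observer state starts at zero
residuals = []
for k in range(40):
    u = rng.standard_normal(1)
    y = C @ x + D @ u
    r = y - C @ xh - D @ u            # observer residual
    residuals.append(float(np.linalg.norm(r)))
    xh = A @ xh + B @ u - L @ r       # observer update
    x = A @ x + B @ u
# the residual decays geometrically regardless of the unknown x(0)
```

Since the estimation error obeys e(k+1) = (A + LC) e(k), the residual Ce(k) decays at the rate of the placed eigenvalues whatever the unknown initial state, so a threshold test on it detects departures from the nominal model.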

It turns out that the observer-based detection and parity check methods, which historically have been developed independently, are in fact very similar. As is shown in [56], in the discrete-time case, the parity check test is equivalent to an observer-based test where the observer is taken to be deadbeat (L is chosen so that A + LC is nilpotent).
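The deadbeat case can be illustrated directly. In the hypothetical 2x2 example below (matrices are assumptions, not from the book), L is found by solving trace(A + LC) = 0 and det(A + LC) = 0 by hand, which makes A + LC nilpotent; the residual then vanishes exactly after n = 2 steps, just as a parity check over a window of length n would.

```python
import numpy as np

# Illustrative matrices (assumptions, not from the book)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# trace(A + LC) = 0 and det(A + LC) = 0 give L = [-1.7, -3.2]^T,
# so A + LC is nilpotent: (A + LC)^2 = 0.
L = np.array([[-1.7], [-3.2]])

rng = np.random.default_rng(2)
x = np.array([1.0, -2.0])   # unknown true initial state
xh = np.zeros(2)            # observer starts at zero
res = []
for k in range(4):
    u = rng.standard_normal(1)
    y = C @ x + D @ u
    r = y - C @ xh - D @ u            # observer residual
    res.append(abs(float(r[0])))
    xh = A @ xh + B @ u - L @ r       # deadbeat observer update
    x = A @ x + B @ u
# res[2] and res[3] are zero to machine precision: deadbeat detection
```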

(Continues...)



Excerpted from Auxiliary Signal Design for Failure Detection by Stephen L. Campbell and Ramine Nikoukhah. Copyright © 2004 by Princeton University Press. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

Preface vii

Chapter 1. Introduction 1

1.1 The Basic Question 1

1.2 Failure Detection 3

1.3 Failure Identification 9

1.4 Active Approach versus Passive Approach 10

1.5 Outline of the Book 13

Chapter 2. Failure Detection 14

2.1 Introduction 14

2.2 Static Case 15

2.3 Continuous-Time Systems 25

2.4 Discrete-Time Systems 36

2.5 Real-Time Implementation Issues 42

2.6 Useful Results 44

Chapter 3. Multimodel Formulation 59

3.1 Introduction 59

3.2 Static Case 60

3.3 Continuous-Time Case 76

3.4 Case of On-line Measured Input 90

3.5 More General Cost Functions 92

3.6 Discrete-Time Case 99

3.7 Suspension Example 102

3.8 Asymptotic Behavior 111

3.9 Useful Results 112

Chapter 4. Direct Optimization Formulations 122

4.1 Introduction 122

4.2 Optimization Formulation for Two Models 123

4.3 General Model Case 138

4.4 Early Detection 142

4.5 Other Extensions 150

4.6 Systems with Delays 155

4.7 Setting Error Bounds 172

4.8 Model Uncertainty 173

Chapter 5. Remaining Problems and Extensions 176

5.1 Direct Extensions 177

5.2 Hybrid and Sampled Data Systems 179

5.3 Relation to Stochastic Modeling 179

Chapter 6. Scilab Programs 181

6.1 Introduction 181

6.2 Riccati-based Solution 181

6.3 The Block Diagonalization Approach 185

6.4 Getting Scilab and the Programs 188

Appendix A. List of Symbols 189

Bibliography 193

Index 201

What People are Saying About This

Frank Lewis

This is the first book I have seen that thoroughly and rigorously addresses an important niche in failure detection.
Frank Lewis, University of Texas, Arlington

Bernard Levy

This book describes the first comprehensive methodology for active failure detection over finite and infinite intervals of observation. The authors are the top researchers in this field, and I anticipate their book will prompt other significant contributions.
Bernard Levy, University of California, Davis
