Neural and Adaptive Systems: Fundamentals through Simulations / Edition 1

ISBN-10: 0471351679
ISBN-13: 9780471351672
Pub. Date: 12/21/1999
Publisher: Wiley
Paperback

$153.75

Overview

Unlike any other text in this field, this book by Jose C. Principe, Neil R. Euliano, and W. Curt Lefebvre unifies the concepts of neural networks and adaptive filters into a common framework.

The text is suitable for senior/graduate courses in neural networks and adaptive filters. It offers over 200 fully functional simulations (with instructions) to demonstrate and reinforce key concepts and help the reader develop an intuition about the behavior of adaptive systems with real data. This creates a powerful self-learning environment highly suitable for the professional audience.


Product Details

ISBN-13: 9780471351672
Publisher: Wiley
Publication date: 12/21/1999
Pages: 672
Product dimensions: 7.74 (w) x 9.72 (h) x 1.18 (d) inches

About the Author

Jose C. Principe, University of Florida.

Neil R. Euliano, NeuroDimension, Inc.

W. Curt Lefebvre, NeuroDimension, Inc.

Read an Excerpt

Chapter 1: Data Fitting With Linear Models

1.1 Introduction

The study of neural and adaptive systems is a unique and growing interdisciplinary field that considers adaptive, distributed, and mostly nonlinear systems, three of the ingredients found in biology. We believe that neural and adaptive systems should be considered another tool in the scientist's and engineer's toolbox. They will effectively complement present engineering design principles and help build the preprocessors to interface with the real world and ensure the optimality needed in complex systems. When applied correctly, a neural or adaptive system may considerably outperform other methods.

Neural and adaptive systems are used in many important engineering applications, such as signal enhancement, noise cancellation, classification of input patterns, system identification, prediction, and control. They are used in many commercial products, such as modems, image-processing and -recognition systems, speech recognition, front-end signal processors, and biomedical instrumentation. We expect that the list will grow exponentially in the near future.

The leading characteristic of neural and adaptive systems is their adaptivity, which brings a totally new system design style (Figure 1-1). Instead of being built a priori from specification, neural and adaptive systems use external data to automatically set their parameters. This means that neural systems are parametric. It also means that they are made "aware" of their output through a performance feedback loop that includes a cost function. The performance feedback is utilized directly to change the parameters through systematic procedures called learning or training rules, so that the system output improves with respect to the desired goal (i.e., the error decreases through training).
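
To make the performance feedback loop of Figure 1-1 concrete, here is a minimal sketch (in Python, with NumPy) of error-driven adaptation for a single linear parameter. The quadratic cost, the learning rate, and the synthetic data are assumptions made only for illustration and are not taken from the text; the point is simply that the performance feedback (the error) drives the parameter updates so that the cost decreases through training.

```python
import numpy as np

# Hypothetical data: the desired response d is a scaled, noisy copy of the input x.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
d = 2.0 * x + 0.1 * rng.normal(size=200)

w = 0.0     # adjustable parameter, set from data rather than from a specification
eta = 0.05  # learning rate (step size), an assumed value

for epoch in range(20):
    for xi, di in zip(x, d):
        y = w * xi         # system output
        e = di - y         # performance feedback: error with respect to the desired goal
        w += eta * e * xi  # training rule: adjust w to reduce the squared error

mse = np.mean((d - w * x) ** 2)
print(f"adapted parameter w = {w:.3f}, final mean squared error = {mse:.4f}")
```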

The system designer has to specify just a few crucial steps in the overall process: he or she has to decide on the system topology, choose a performance criterion, and design the adaptive algorithms. In neural systems the parameters are often adjusted using a selected set of data called the training set and are then fixed during operation. The designer thus has to know how to specify the input and desired response data and when to stop the training phase. In adaptive systems the parameters are continuously adapted during operation with the current data. We are at a very exciting stage in neural and adaptive system development because

  • We now know some powerful topologies that are able to create universal input/output mappings.

  • We also know how to design general adaptive algorithms to extract information from data and adapt the parameters of the mappers.

  • We are also starting to understand the prerequisites for generalization, that is, how to guarantee that the performance in the training set extends to the data found during system operation.

Therefore we are in a position to design effective adaptive solutions to moderately difficult real-world problems. Because of the practicality derived from these advances, we believe that the time is right to teach adaptive systems in undergraduate engineering and science curricula.
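
The generalization point above can be stated operationally: whatever performance criterion is minimized on the training set should also be evaluated on data withheld from training, with the parameters kept fixed. The following Python sketch, built on assumed synthetic data and a quadratic cost, illustrates that check; it is not a procedure taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=300)
d = 1.5 * x + 0.2 * rng.normal(size=300)  # hypothetical input/desired-response pairs

# The parameters are adapted only on the training set and then fixed.
x_train, d_train = x[:200], d[:200]
x_test, d_test = x[200:], d[200:]

# Fit a simple linear mapper on the training set (least squares, for illustration).
w = np.dot(x_train, d_train) / np.dot(x_train, x_train)

mse_train = np.mean((d_train - w * x_train) ** 2)
mse_test = np.mean((d_test - w * x_test) ** 2)

# If the mapper generalizes, the test error stays close to the training error.
print(f"train MSE = {mse_train:.4f}, test MSE = {mse_test:.4f}")
```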

Throughout this textbook we explain the principles that are necessary to make judicious choices about the design options for neural and adaptive systems. The discussion is slanted toward engineering, both in terminology and in perspective. We are very much interested in the engineering model-based approach and in explaining the mathematical principles at work. We center the explanation on concepts from adaptive signal processing, which are rooted in statistics, pattern recognition, and digital signal processing. Moreover, our study is restricted to model building from data.

1.1.1 Engineering Design and Adaptive Systems

Engineering is a discipline that builds physical systems from human dreams, reinventing the physical world around us. In this respect it transcends physics, which has the passive role of explaining the world, and also mathematics, which stops at the edge of physical reality. Engineering design is like a gigantic Lego construction, where each piece is a subsystem grounded in its physical or mathematical principles. The role of the engineer is to develop the blueprint of the dream through specifications and then to look for the pieces that fit the blueprint. Obviously, the pieces cannot be put together at random, since each has its own principles, so it is mandatory that the scientist or the engineer learn the principles attached to each piece and specify the interface. Normally, this study is done using the scientific method. When the system is physical, we use the principles of physics, and when it is software, we use the principles of mathematics.

Development of the Phone System

A good example of engineering design is the telephone system. Long and meticulous research was conducted at Bell Laboratories on human perception of speech. This created the specification for the required bandwidth and noise level for speech intelligibility. Engineers perfected the microphone that would translate the pressure waves into electrical waves to meet the specification. Then these electrical waves were transmitted through copper wires over long distances to a similar device, still preserving the required specification. For increased functionality the freedom of reaching any other telephone was added to the system, so switching of calls had to be implemented. This created the phone system. Initially, the switching among lines was done by operators. Then a machine was invented that would automatically switch the calls. Operators were still used for special services such as directory assistance, but now that the fundamental engineering aspects are stable, we are asking machines to automatically recognize speech and directly assist callers.

The development of the phone system is an excellent example of engineering design. Once we have a vision, we try to understand the principles at work and create specifications and a system architecture. The fundamental principles at work are found by applying the scientific method. The phenomenon under analysis is first studied with physics or mathematics. The importance of models is that they translate general principles, and through deduction we can apply them to particular cases like the ones we are interested in. These disciplines create approximate models of the external world using the principle of divide and conquer. The problem is divided in manageable pieces, each is studied independently of the others, and protocols among the pieces are drawn such that the system can work as a whole, meeting the specifications drawn a priori. This is what engineering design is today.

The scientific method has been highly successful in engineering, but let us evaluate it in broad terms. First, engineering design requires the availability of a model for each subsystem. Second, when the number of pieces increases, the interactions among the subsystems increase exponentially. Fundamental research will continue to provide a steady flux of new physical and mathematical principles (provided the present trend of reduced federal funding for fundamental science is reversed), but the exponential growth of interactions required for larger and more sophisticated systems is harder to control. In fact, at this point in time, we simply do not have a clear vision of how to handle complexity in the long term. But there are two more factors that present major challenges: the autonomous interaction of systems with the environment and the optimality of the design. We will discuss these now.

Humans have traditionally mediated the interaction of engineering systems with the external world. After all, we use technology to reduce our physical constraints, so we have traditionally maintained control of the machines we build. Since the invention of the digital computer, there has been a trend to create machines that interact directly with the external world without a human in the loop. This brings the complexity of the external world directly into engineering design. We are not yet totally prepared for this, because our mathematical and physical theories about the external world are mere approximations: very good approximations in some cases, but rather poor in others. This disturbs the order of engineering design and creates performance problems (the worst subsystem tends to limit the performance of the full system).

Mars Pathfinder Mission

When machines have to autonomously interact with the environment or operate near the optimal set point, we cannot specify all the functions a priori and in a deterministic way. Take, for instance, the Mars Pathfinder mission. It was totally impossible to specify all the possible conditions that the rover Sojourner would face, even if remotely controlled from Earth, so the problem could not be solved by a sequence of instructions determined a priori in JPL's laboratory. The vehicle was given high-level instructions (way points) and was equipped with cameras and laser sensors that would see the terrain. The information from the sensors was analyzed and catalogued in general classes. For each class a procedure was designed to accomplish the goal of moving from point A to point B. This is the type of engineering system we will build more and more of in the future.

The big difference between the initial machines and Sojourner is that the environment is intrinsically in the loop of the machine function. This brings a very different set of problems, because, as we said earlier, the environment is complex and unpredictable. If our physical model does not capture the essentials of the environment, errors accumulate over time, and the solution becomes impractical. We thus no longer have the luxury of dictating the rules of the game, as we did in the early machine-building era. It turns out that animals and humans do Sojourner-type tasks effortlessly.

System optimality is also a rising concern to save resources and augment the performance/price ratio. We might think that optimally designing each subsystem would bring global optimality, but this is not always true. Optimal design of complex systems is thus a difficult problem that must also take into consideration the particular type of system function, meaning that the complexity of the environment is once again present. We can conclude that the current challenges faced in engineering are the complexity of the systems, the need for optimal performance, and the autonomous interaction with the environment that will require some form of intelligence. These are the challenges for engineering in the 21st century and beyond.

Whenever there is a new challenge, we should consider new solutions. Quite often the difficulty of a task is also linked to the particular method we are using to find the solution. Is building machines by specification the only way to proceed?

Let us look at living creatures from an engineering systems perspective. The cell is the optimal factory, building directly from the environment at the fundamental molecular level what it needs to carry out its function. The animals we observe today interact efficiently with the environment (otherwise they would not have survived); they work very close to optimality in terms of resources (otherwise they would have been replaced in their niche by more efficient animals); and they certainly are complex. Biology has, in fact, already conquered some of the challenges we face in building engineering systems, so it is worthwhile to investigate the principles at work.

Biology has found a set of inductive principles that are particularly well tuned to the interaction with a complex and unpredictable environment. These principles are not known explicitly but are being intensively studied in biology, computational neurosciences, statistics, computer science, and engineering. They involve extraction of information from sensor data (feature extraction), efficient learning from data, creation of invariants and representations, and decision making under uncertainty. In a global sense, autonomous agents have to build and fit models to data through their daily experience; they have to store these models, choose which shall be applied in each circumstance, and assess the likelihood of success for a given task. An implicit optimization principle is at play, since the goal is to do the best with the available information and resources.

From a scientific perspective, biology uses adaptation to build optimal system functionality. The anatomical organization of the animal (the wetware) is specified in the long term by the environment (through evolution), and in the short term it is used as a constraint to extract in real time the information that the animal needs to survive. At the nervous-system level, it is well accepted that the interaction with the environment molds the wetware using a learning-from-examples metaphor.

1.1.2 Experimental Model Building

The problem of data fitting is one of the oldest in experimental science. The real world tends to be very complex and unpredictable, and the exact mechanisms that generate the data are often unknown. Moreover, when we collect physical variables, the sensors are not ideal (of finite precision, noisy, with constrained bandwidth, etc.), so the measurements do not represent the real phenomena exactly. One of the quests in science is to estimate the underlying data model.

The importance of inferring a model from the data is to apply mathematical reasoning to the problem. The major advantage of a mathematical model is the ability to understand, explain, predict, and control outcomes in the natural system [Casti, 1989]. Figure 1-2 illustrates the data-modeling process. The most important advantage of the existence of a formal equivalent model is the ability to predict the natural system's behavior at a future time and to control its outputs by applying appropriate inputs.

In this chapter we address the issues of fitting data with linear models, which is called the linear regression problem [Dunteman, 1989]. Notice that we have not specified what the data is, because it is really immaterial. We are seeking relationships between the values of the external (observable) variables of the natural system in Figure 1-2. This methodology can therefore be applied to meteorological data, biological data, financial data, marketing data, engineering data, and so on.
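
As a preview of the machinery developed in the rest of the chapter, the sketch below fits a straight line d ≈ wx + b to noisy observations by ordinary least squares. The synthetic data and the closed-form NumPy solution are illustrative assumptions; the chapter goes on to treat the linear regression problem in detail.

```python
import numpy as np

# Hypothetical observations: x is assumed error free, d carries measurement noise.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 30)
d = 0.8 * x + 1.0 + 0.5 * rng.normal(size=x.size)

# Least-squares fit of the linear model d ~ w*x + b.
A = np.column_stack([x, np.ones_like(x)])       # regressor matrix [x, 1]
(w, b), *_ = np.linalg.lstsq(A, d, rcond=None)  # closed-form regression solution

mse = np.mean((d - (w * x + b)) ** 2)
print(f"fitted slope w = {w:.3f}, intercept b = {b:.3f}, MSE = {mse:.4f}")
```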

1.1.3 Data Collection

The data-collection phase must be carefully planned to ensure that

  • Data will be sufficient.

  • Data will capture the fundamental principles at work.

  • Data is as free as possible from observation noise.

Table 1-1 presents a data example with two variables (x, d) in tabular form. The measurement x is assumed error free, and d is contaminated by noise. From Table 1-1 very little can be said about the data, except that there is a positive trend between the variables (i.e., when x increases, d also increases). Our brain is somehow able to extract much more information from pictures than from numbers, so data should first be plotted before performing data analysis. Plotting the data allows verification, assures the researcher that the data was collected correctly, and provides a "feel" for the relationships that exist in the data (e.g., natural trends)....
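
Since the values of Table 1-1 are not reproduced in this excerpt, the sketch below uses hypothetical (x, d) pairs purely to illustrate the recommended first step: plot the data and look at the trend before doing any numerical analysis.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical (x, d) pairs standing in for Table 1-1: x error free, d noisy.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
d = np.array([0.7, 1.1, 1.8, 1.9, 2.6, 2.8, 3.4, 3.9])

# A scatter plot makes the positive trend between x and d immediately visible.
plt.scatter(x, d)
plt.xlabel("x (input, assumed error free)")
plt.ylabel("d (desired response, noisy)")
plt.title("Plot the data before fitting a model")
plt.show()

# A quick numerical check of the trend: the sample correlation coefficient.
print(f"correlation(x, d) = {np.corrcoef(x, d)[0, 1]:.3f}")
```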

Table of Contents

Chapter 1 Data Fitting with Linear Models 1

Chapter 2 Pattern Recognition 68

Chapter 3 Multilayer Perceptrons 100

Chapter 4 Designing and Training MLPs 173

Chapter 5 Function Approximation with MLPs, Radial Basis Functions, and Support Vector Machines 223

Chapter 6 Hebbian Learning and Principal Component Analysis 279

Chapter 7 Competitive and Kohonen Networks 333

Chapter 8 Principles of Digital Signal Processing 364

Chapter 9 Adaptive Filters 429

Chapter 10 Temporal Processing with Neural Networks 473

Chapter 11 Training and Using Recurrent Networks 525

Appendix A Elements of Linear Algebra and Pattern Recognition 589

Appendix B NeuroSolutions Tutorial 613

Appendix C Data Directory 637

Glossary 639

Index 647
